Dec  9 04:42:58 np0005551604 kernel: Linux version 5.14.0-648.el9.x86_64 (mockbuild@x86-05.stream.rdu2.redhat.com) (gcc (GCC) 11.5.0 20240719 (Red Hat 11.5.0-14), GNU ld version 2.35.2-69.el9) #1 SMP PREEMPT_DYNAMIC Fri Dec 5 11:18:23 UTC 2025
Dec  9 04:42:58 np0005551604 kernel: The list of certified hardware and cloud instances for Red Hat Enterprise Linux 9 can be viewed at the Red Hat Ecosystem Catalog, https://catalog.redhat.com.
Dec  9 04:42:58 np0005551604 kernel: Command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-648.el9.x86_64 root=UUID=fcf6b761-831a-48a7-9f5f-068b5063763f ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Dec  9 04:42:58 np0005551604 kernel: BIOS-provided physical RAM map:
Dec  9 04:42:58 np0005551604 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Dec  9 04:42:58 np0005551604 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Dec  9 04:42:58 np0005551604 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Dec  9 04:42:58 np0005551604 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdafff] usable
Dec  9 04:42:58 np0005551604 kernel: BIOS-e820: [mem 0x00000000bffdb000-0x00000000bfffffff] reserved
Dec  9 04:42:58 np0005551604 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Dec  9 04:42:58 np0005551604 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Dec  9 04:42:58 np0005551604 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000023fffffff] usable
Dec  9 04:42:58 np0005551604 kernel: NX (Execute Disable) protection: active
Dec  9 04:42:58 np0005551604 kernel: APIC: Static calls initialized
Dec  9 04:42:58 np0005551604 kernel: SMBIOS 2.8 present.
Dec  9 04:42:58 np0005551604 kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014
Dec  9 04:42:58 np0005551604 kernel: Hypervisor detected: KVM
Dec  9 04:42:58 np0005551604 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Dec  9 04:42:58 np0005551604 kernel: kvm-clock: using sched offset of 3836543039 cycles
Dec  9 04:42:58 np0005551604 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Dec  9 04:42:58 np0005551604 kernel: tsc: Detected 2799.998 MHz processor
Dec  9 04:42:58 np0005551604 kernel: last_pfn = 0x240000 max_arch_pfn = 0x400000000
Dec  9 04:42:58 np0005551604 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Dec  9 04:42:58 np0005551604 kernel: x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Dec  9 04:42:58 np0005551604 kernel: last_pfn = 0xbffdb max_arch_pfn = 0x400000000
Dec  9 04:42:58 np0005551604 kernel: found SMP MP-table at [mem 0x000f5ae0-0x000f5aef]
Dec  9 04:42:58 np0005551604 kernel: Using GB pages for direct mapping
Dec  9 04:42:58 np0005551604 kernel: RAMDISK: [mem 0x2e955000-0x334a2fff]
Dec  9 04:42:58 np0005551604 kernel: ACPI: Early table checksum verification disabled
Dec  9 04:42:58 np0005551604 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Dec  9 04:42:58 np0005551604 kernel: ACPI: RSDT 0x00000000BFFE16BD 000030 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Dec  9 04:42:58 np0005551604 kernel: ACPI: FACP 0x00000000BFFE1571 000074 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Dec  9 04:42:58 np0005551604 kernel: ACPI: DSDT 0x00000000BFFDFC80 0018F1 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Dec  9 04:42:58 np0005551604 kernel: ACPI: FACS 0x00000000BFFDFC40 000040
Dec  9 04:42:58 np0005551604 kernel: ACPI: APIC 0x00000000BFFE15E5 0000B0 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Dec  9 04:42:58 np0005551604 kernel: ACPI: WAET 0x00000000BFFE1695 000028 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Dec  9 04:42:58 np0005551604 kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1571-0xbffe15e4]
Dec  9 04:42:58 np0005551604 kernel: ACPI: Reserving DSDT table memory at [mem 0xbffdfc80-0xbffe1570]
Dec  9 04:42:58 np0005551604 kernel: ACPI: Reserving FACS table memory at [mem 0xbffdfc40-0xbffdfc7f]
Dec  9 04:42:58 np0005551604 kernel: ACPI: Reserving APIC table memory at [mem 0xbffe15e5-0xbffe1694]
Dec  9 04:42:58 np0005551604 kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1695-0xbffe16bc]
Dec  9 04:42:58 np0005551604 kernel: No NUMA configuration found
Dec  9 04:42:58 np0005551604 kernel: Faking a node at [mem 0x0000000000000000-0x000000023fffffff]
Dec  9 04:42:58 np0005551604 kernel: NODE_DATA(0) allocated [mem 0x23ffd5000-0x23fffffff]
Dec  9 04:42:58 np0005551604 kernel: crashkernel reserved: 0x00000000af000000 - 0x00000000bf000000 (256 MB)
Dec  9 04:42:58 np0005551604 kernel: Zone ranges:
Dec  9 04:42:58 np0005551604 kernel:  DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Dec  9 04:42:58 np0005551604 kernel:  DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Dec  9 04:42:58 np0005551604 kernel:  Normal   [mem 0x0000000100000000-0x000000023fffffff]
Dec  9 04:42:58 np0005551604 kernel:  Device   empty
Dec  9 04:42:58 np0005551604 kernel: Movable zone start for each node
Dec  9 04:42:58 np0005551604 kernel: Early memory node ranges
Dec  9 04:42:58 np0005551604 kernel:  node   0: [mem 0x0000000000001000-0x000000000009efff]
Dec  9 04:42:58 np0005551604 kernel:  node   0: [mem 0x0000000000100000-0x00000000bffdafff]
Dec  9 04:42:58 np0005551604 kernel:  node   0: [mem 0x0000000100000000-0x000000023fffffff]
Dec  9 04:42:58 np0005551604 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000023fffffff]
Dec  9 04:42:58 np0005551604 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Dec  9 04:42:58 np0005551604 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Dec  9 04:42:58 np0005551604 kernel: On node 0, zone Normal: 37 pages in unavailable ranges
Dec  9 04:42:58 np0005551604 kernel: ACPI: PM-Timer IO Port: 0x608
Dec  9 04:42:58 np0005551604 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Dec  9 04:42:58 np0005551604 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Dec  9 04:42:58 np0005551604 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Dec  9 04:42:58 np0005551604 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Dec  9 04:42:58 np0005551604 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Dec  9 04:42:58 np0005551604 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Dec  9 04:42:58 np0005551604 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Dec  9 04:42:58 np0005551604 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Dec  9 04:42:58 np0005551604 kernel: TSC deadline timer available
Dec  9 04:42:58 np0005551604 kernel: CPU topo: Max. logical packages:   8
Dec  9 04:42:58 np0005551604 kernel: CPU topo: Max. logical dies:       8
Dec  9 04:42:58 np0005551604 kernel: CPU topo: Max. dies per package:   1
Dec  9 04:42:58 np0005551604 kernel: CPU topo: Max. threads per core:   1
Dec  9 04:42:58 np0005551604 kernel: CPU topo: Num. cores per package:     1
Dec  9 04:42:58 np0005551604 kernel: CPU topo: Num. threads per package:   1
Dec  9 04:42:58 np0005551604 kernel: CPU topo: Allowing 8 present CPUs plus 0 hotplug CPUs
Dec  9 04:42:58 np0005551604 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Dec  9 04:42:58 np0005551604 kernel: PM: hibernation: Registered nosave memory: [mem 0x00000000-0x00000fff]
Dec  9 04:42:58 np0005551604 kernel: PM: hibernation: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
Dec  9 04:42:58 np0005551604 kernel: PM: hibernation: Registered nosave memory: [mem 0x000a0000-0x000effff]
Dec  9 04:42:58 np0005551604 kernel: PM: hibernation: Registered nosave memory: [mem 0x000f0000-0x000fffff]
Dec  9 04:42:58 np0005551604 kernel: PM: hibernation: Registered nosave memory: [mem 0xbffdb000-0xbfffffff]
Dec  9 04:42:58 np0005551604 kernel: PM: hibernation: Registered nosave memory: [mem 0xc0000000-0xfeffbfff]
Dec  9 04:42:58 np0005551604 kernel: PM: hibernation: Registered nosave memory: [mem 0xfeffc000-0xfeffffff]
Dec  9 04:42:58 np0005551604 kernel: PM: hibernation: Registered nosave memory: [mem 0xff000000-0xfffbffff]
Dec  9 04:42:58 np0005551604 kernel: PM: hibernation: Registered nosave memory: [mem 0xfffc0000-0xffffffff]
Dec  9 04:42:58 np0005551604 kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
Dec  9 04:42:58 np0005551604 kernel: Booting paravirtualized kernel on KVM
Dec  9 04:42:58 np0005551604 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Dec  9 04:42:58 np0005551604 kernel: setup_percpu: NR_CPUS:8192 nr_cpumask_bits:8 nr_cpu_ids:8 nr_node_ids:1
Dec  9 04:42:58 np0005551604 kernel: percpu: Embedded 64 pages/cpu s225280 r8192 d28672 u262144
Dec  9 04:42:58 np0005551604 kernel: kvm-guest: PV spinlocks disabled, no host support
Dec  9 04:42:58 np0005551604 kernel: Kernel command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-648.el9.x86_64 root=UUID=fcf6b761-831a-48a7-9f5f-068b5063763f ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Dec  9 04:42:58 np0005551604 kernel: Unknown kernel command line parameters "BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-648.el9.x86_64", will be passed to user space.
Dec  9 04:42:58 np0005551604 kernel: random: crng init done
Dec  9 04:42:58 np0005551604 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Dec  9 04:42:58 np0005551604 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Dec  9 04:42:58 np0005551604 kernel: Fallback order for Node 0: 0 
Dec  9 04:42:58 np0005551604 kernel: Built 1 zonelists, mobility grouping on.  Total pages: 2064091
Dec  9 04:42:58 np0005551604 kernel: Policy zone: Normal
Dec  9 04:42:58 np0005551604 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec  9 04:42:58 np0005551604 kernel: software IO TLB: area num 8.
Dec  9 04:42:58 np0005551604 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=8, Nodes=1
Dec  9 04:42:58 np0005551604 kernel: ftrace: allocating 49357 entries in 193 pages
Dec  9 04:42:58 np0005551604 kernel: ftrace: allocated 193 pages with 3 groups
Dec  9 04:42:58 np0005551604 kernel: Dynamic Preempt: voluntary
Dec  9 04:42:58 np0005551604 kernel: rcu: Preemptible hierarchical RCU implementation.
Dec  9 04:42:58 np0005551604 kernel: rcu: 	RCU event tracing is enabled.
Dec  9 04:42:58 np0005551604 kernel: rcu: 	RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=8.
Dec  9 04:42:58 np0005551604 kernel: 	Trampoline variant of Tasks RCU enabled.
Dec  9 04:42:58 np0005551604 kernel: 	Rude variant of Tasks RCU enabled.
Dec  9 04:42:58 np0005551604 kernel: 	Tracing variant of Tasks RCU enabled.
Dec  9 04:42:58 np0005551604 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec  9 04:42:58 np0005551604 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=8
Dec  9 04:42:58 np0005551604 kernel: RCU Tasks: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Dec  9 04:42:58 np0005551604 kernel: RCU Tasks Rude: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Dec  9 04:42:58 np0005551604 kernel: RCU Tasks Trace: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Dec  9 04:42:58 np0005551604 kernel: NR_IRQS: 524544, nr_irqs: 488, preallocated irqs: 16
Dec  9 04:42:58 np0005551604 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Dec  9 04:42:58 np0005551604 kernel: kfence: initialized - using 2097152 bytes for 255 objects at 0x(____ptrval____)-0x(____ptrval____)
Dec  9 04:42:58 np0005551604 kernel: Console: colour VGA+ 80x25
Dec  9 04:42:58 np0005551604 kernel: printk: console [ttyS0] enabled
Dec  9 04:42:58 np0005551604 kernel: ACPI: Core revision 20230331
Dec  9 04:42:58 np0005551604 kernel: APIC: Switch to symmetric I/O mode setup
Dec  9 04:42:58 np0005551604 kernel: x2apic enabled
Dec  9 04:42:58 np0005551604 kernel: APIC: Switched APIC routing to: physical x2apic
Dec  9 04:42:58 np0005551604 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Dec  9 04:42:58 np0005551604 kernel: Calibrating delay loop (skipped) preset value.. 5599.99 BogoMIPS (lpj=2799998)
Dec  9 04:42:58 np0005551604 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Dec  9 04:42:58 np0005551604 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Dec  9 04:42:58 np0005551604 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Dec  9 04:42:58 np0005551604 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Dec  9 04:42:58 np0005551604 kernel: Spectre V2 : Mitigation: Retpolines
Dec  9 04:42:58 np0005551604 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Dec  9 04:42:58 np0005551604 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Dec  9 04:42:58 np0005551604 kernel: RETBleed: Mitigation: untrained return thunk
Dec  9 04:42:58 np0005551604 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Dec  9 04:42:58 np0005551604 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Dec  9 04:42:58 np0005551604 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Dec  9 04:42:58 np0005551604 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Dec  9 04:42:58 np0005551604 kernel: x86/bugs: return thunk changed
Dec  9 04:42:58 np0005551604 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Dec  9 04:42:58 np0005551604 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Dec  9 04:42:58 np0005551604 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Dec  9 04:42:58 np0005551604 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Dec  9 04:42:58 np0005551604 kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Dec  9 04:42:58 np0005551604 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Dec  9 04:42:58 np0005551604 kernel: Freeing SMP alternatives memory: 40K
Dec  9 04:42:58 np0005551604 kernel: pid_max: default: 32768 minimum: 301
Dec  9 04:42:58 np0005551604 kernel: LSM: initializing lsm=lockdown,capability,landlock,yama,integrity,selinux,bpf
Dec  9 04:42:58 np0005551604 kernel: landlock: Up and running.
Dec  9 04:42:58 np0005551604 kernel: Yama: becoming mindful.
Dec  9 04:42:58 np0005551604 kernel: SELinux:  Initializing.
Dec  9 04:42:58 np0005551604 kernel: LSM support for eBPF active
Dec  9 04:42:58 np0005551604 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Dec  9 04:42:58 np0005551604 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Dec  9 04:42:58 np0005551604 kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0)
Dec  9 04:42:58 np0005551604 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Dec  9 04:42:58 np0005551604 kernel: ... version:                0
Dec  9 04:42:58 np0005551604 kernel: ... bit width:              48
Dec  9 04:42:58 np0005551604 kernel: ... generic registers:      6
Dec  9 04:42:58 np0005551604 kernel: ... value mask:             0000ffffffffffff
Dec  9 04:42:58 np0005551604 kernel: ... max period:             00007fffffffffff
Dec  9 04:42:58 np0005551604 kernel: ... fixed-purpose events:   0
Dec  9 04:42:58 np0005551604 kernel: ... event mask:             000000000000003f
Dec  9 04:42:58 np0005551604 kernel: signal: max sigframe size: 1776
Dec  9 04:42:58 np0005551604 kernel: rcu: Hierarchical SRCU implementation.
Dec  9 04:42:58 np0005551604 kernel: rcu: 	Max phase no-delay instances is 400.
Dec  9 04:42:58 np0005551604 kernel: smp: Bringing up secondary CPUs ...
Dec  9 04:42:58 np0005551604 kernel: smpboot: x86: Booting SMP configuration:
Dec  9 04:42:58 np0005551604 kernel: .... node  #0, CPUs:      #1 #2 #3 #4 #5 #6 #7
Dec  9 04:42:58 np0005551604 kernel: smp: Brought up 1 node, 8 CPUs
Dec  9 04:42:58 np0005551604 kernel: smpboot: Total of 8 processors activated (44799.96 BogoMIPS)
Dec  9 04:42:58 np0005551604 kernel: node 0 deferred pages initialised in 33ms
Dec  9 04:42:58 np0005551604 kernel: Memory: 7774652K/8388068K available (16384K kernel code, 5795K rwdata, 13916K rodata, 4192K init, 7164K bss, 607516K reserved, 0K cma-reserved)
Dec  9 04:42:58 np0005551604 kernel: devtmpfs: initialized
Dec  9 04:42:58 np0005551604 kernel: x86/mm: Memory block size: 128MB
Dec  9 04:42:58 np0005551604 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec  9 04:42:58 np0005551604 kernel: futex hash table entries: 2048 (131072 bytes on 1 NUMA nodes, total 128 KiB, linear).
Dec  9 04:42:58 np0005551604 kernel: pinctrl core: initialized pinctrl subsystem
Dec  9 04:42:58 np0005551604 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec  9 04:42:58 np0005551604 kernel: DMA: preallocated 1024 KiB GFP_KERNEL pool for atomic allocations
Dec  9 04:42:58 np0005551604 kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Dec  9 04:42:58 np0005551604 kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Dec  9 04:42:58 np0005551604 kernel: audit: initializing netlink subsys (disabled)
Dec  9 04:42:58 np0005551604 kernel: audit: type=2000 audit(1765273375.928:1): state=initialized audit_enabled=0 res=1
Dec  9 04:42:58 np0005551604 kernel: thermal_sys: Registered thermal governor 'fair_share'
Dec  9 04:42:58 np0005551604 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec  9 04:42:58 np0005551604 kernel: thermal_sys: Registered thermal governor 'user_space'
Dec  9 04:42:58 np0005551604 kernel: cpuidle: using governor menu
Dec  9 04:42:58 np0005551604 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec  9 04:42:58 np0005551604 kernel: PCI: Using configuration type 1 for base access
Dec  9 04:42:58 np0005551604 kernel: PCI: Using configuration type 1 for extended access
Dec  9 04:42:58 np0005551604 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Dec  9 04:42:58 np0005551604 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Dec  9 04:42:58 np0005551604 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Dec  9 04:42:58 np0005551604 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Dec  9 04:42:58 np0005551604 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Dec  9 04:42:58 np0005551604 kernel: Demotion targets for Node 0: null
Dec  9 04:42:58 np0005551604 kernel: cryptd: max_cpu_qlen set to 1000
Dec  9 04:42:58 np0005551604 kernel: ACPI: Added _OSI(Module Device)
Dec  9 04:42:58 np0005551604 kernel: ACPI: Added _OSI(Processor Device)
Dec  9 04:42:58 np0005551604 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Dec  9 04:42:58 np0005551604 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec  9 04:42:58 np0005551604 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec  9 04:42:58 np0005551604 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Dec  9 04:42:58 np0005551604 kernel: ACPI: Interpreter enabled
Dec  9 04:42:58 np0005551604 kernel: ACPI: PM: (supports S0 S3 S4 S5)
Dec  9 04:42:58 np0005551604 kernel: ACPI: Using IOAPIC for interrupt routing
Dec  9 04:42:58 np0005551604 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Dec  9 04:42:58 np0005551604 kernel: PCI: Using E820 reservations for host bridge windows
Dec  9 04:42:58 np0005551604 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Dec  9 04:42:58 np0005551604 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Dec  9 04:42:58 np0005551604 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3]
Dec  9 04:42:58 np0005551604 kernel: acpiphp: Slot [3] registered
Dec  9 04:42:58 np0005551604 kernel: acpiphp: Slot [4] registered
Dec  9 04:42:58 np0005551604 kernel: acpiphp: Slot [5] registered
Dec  9 04:42:58 np0005551604 kernel: acpiphp: Slot [6] registered
Dec  9 04:42:58 np0005551604 kernel: acpiphp: Slot [7] registered
Dec  9 04:42:58 np0005551604 kernel: acpiphp: Slot [8] registered
Dec  9 04:42:58 np0005551604 kernel: acpiphp: Slot [9] registered
Dec  9 04:42:58 np0005551604 kernel: acpiphp: Slot [10] registered
Dec  9 04:42:58 np0005551604 kernel: acpiphp: Slot [11] registered
Dec  9 04:42:58 np0005551604 kernel: acpiphp: Slot [12] registered
Dec  9 04:42:58 np0005551604 kernel: acpiphp: Slot [13] registered
Dec  9 04:42:58 np0005551604 kernel: acpiphp: Slot [14] registered
Dec  9 04:42:58 np0005551604 kernel: acpiphp: Slot [15] registered
Dec  9 04:42:58 np0005551604 kernel: acpiphp: Slot [16] registered
Dec  9 04:42:58 np0005551604 kernel: acpiphp: Slot [17] registered
Dec  9 04:42:58 np0005551604 kernel: acpiphp: Slot [18] registered
Dec  9 04:42:58 np0005551604 kernel: acpiphp: Slot [19] registered
Dec  9 04:42:58 np0005551604 kernel: acpiphp: Slot [20] registered
Dec  9 04:42:58 np0005551604 kernel: acpiphp: Slot [21] registered
Dec  9 04:42:58 np0005551604 kernel: acpiphp: Slot [22] registered
Dec  9 04:42:58 np0005551604 kernel: acpiphp: Slot [23] registered
Dec  9 04:42:58 np0005551604 kernel: acpiphp: Slot [24] registered
Dec  9 04:42:58 np0005551604 kernel: acpiphp: Slot [25] registered
Dec  9 04:42:58 np0005551604 kernel: acpiphp: Slot [26] registered
Dec  9 04:42:58 np0005551604 kernel: acpiphp: Slot [27] registered
Dec  9 04:42:58 np0005551604 kernel: acpiphp: Slot [28] registered
Dec  9 04:42:58 np0005551604 kernel: acpiphp: Slot [29] registered
Dec  9 04:42:58 np0005551604 kernel: acpiphp: Slot [30] registered
Dec  9 04:42:58 np0005551604 kernel: acpiphp: Slot [31] registered
Dec  9 04:42:58 np0005551604 kernel: PCI host bridge to bus 0000:00
Dec  9 04:42:58 np0005551604 kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Dec  9 04:42:58 np0005551604 kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Dec  9 04:42:58 np0005551604 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Dec  9 04:42:58 np0005551604 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Dec  9 04:42:58 np0005551604 kernel: pci_bus 0000:00: root bus resource [mem 0x240000000-0x2bfffffff window]
Dec  9 04:42:58 np0005551604 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Dec  9 04:42:58 np0005551604 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Dec  9 04:42:58 np0005551604 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Dec  9 04:42:58 np0005551604 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint
Dec  9 04:42:58 np0005551604 kernel: pci 0000:00:01.1: BAR 4 [io  0xc140-0xc14f]
Dec  9 04:42:58 np0005551604 kernel: pci 0000:00:01.1: BAR 0 [io  0x01f0-0x01f7]: legacy IDE quirk
Dec  9 04:42:58 np0005551604 kernel: pci 0000:00:01.1: BAR 1 [io  0x03f6]: legacy IDE quirk
Dec  9 04:42:58 np0005551604 kernel: pci 0000:00:01.1: BAR 2 [io  0x0170-0x0177]: legacy IDE quirk
Dec  9 04:42:58 np0005551604 kernel: pci 0000:00:01.1: BAR 3 [io  0x0376]: legacy IDE quirk
Dec  9 04:42:58 np0005551604 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint
Dec  9 04:42:58 np0005551604 kernel: pci 0000:00:01.2: BAR 4 [io  0xc100-0xc11f]
Dec  9 04:42:58 np0005551604 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Dec  9 04:42:58 np0005551604 kernel: pci 0000:00:01.3: quirk: [io  0x0600-0x063f] claimed by PIIX4 ACPI
Dec  9 04:42:58 np0005551604 kernel: pci 0000:00:01.3: quirk: [io  0x0700-0x070f] claimed by PIIX4 SMB
Dec  9 04:42:58 np0005551604 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Dec  9 04:42:58 np0005551604 kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref]
Dec  9 04:42:58 np0005551604 kernel: pci 0000:00:02.0: BAR 2 [mem 0xfe800000-0xfe803fff 64bit pref]
Dec  9 04:42:58 np0005551604 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfeb90000-0xfeb90fff]
Dec  9 04:42:58 np0005551604 kernel: pci 0000:00:02.0: ROM [mem 0xfeb80000-0xfeb8ffff pref]
Dec  9 04:42:58 np0005551604 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Dec  9 04:42:58 np0005551604 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Dec  9 04:42:58 np0005551604 kernel: pci 0000:00:03.0: BAR 0 [io  0xc080-0xc0bf]
Dec  9 04:42:58 np0005551604 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfeb91000-0xfeb91fff]
Dec  9 04:42:58 np0005551604 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe804000-0xfe807fff 64bit pref]
Dec  9 04:42:58 np0005551604 kernel: pci 0000:00:03.0: ROM [mem 0xfeb00000-0xfeb7ffff pref]
Dec  9 04:42:58 np0005551604 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Dec  9 04:42:58 np0005551604 kernel: pci 0000:00:04.0: BAR 0 [io  0xc000-0xc07f]
Dec  9 04:42:58 np0005551604 kernel: pci 0000:00:04.0: BAR 1 [mem 0xfeb92000-0xfeb92fff]
Dec  9 04:42:58 np0005551604 kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe808000-0xfe80bfff 64bit pref]
Dec  9 04:42:58 np0005551604 kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint
Dec  9 04:42:58 np0005551604 kernel: pci 0000:00:05.0: BAR 0 [io  0xc0c0-0xc0ff]
Dec  9 04:42:58 np0005551604 kernel: pci 0000:00:05.0: BAR 4 [mem 0xfe80c000-0xfe80ffff 64bit pref]
Dec  9 04:42:58 np0005551604 kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Dec  9 04:42:58 np0005551604 kernel: pci 0000:00:06.0: BAR 0 [io  0xc120-0xc13f]
Dec  9 04:42:58 np0005551604 kernel: pci 0000:00:06.0: BAR 4 [mem 0xfe810000-0xfe813fff 64bit pref]
Dec  9 04:42:58 np0005551604 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Dec  9 04:42:58 np0005551604 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Dec  9 04:42:58 np0005551604 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Dec  9 04:42:58 np0005551604 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Dec  9 04:42:58 np0005551604 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Dec  9 04:42:58 np0005551604 kernel: iommu: Default domain type: Translated
Dec  9 04:42:58 np0005551604 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Dec  9 04:42:58 np0005551604 kernel: SCSI subsystem initialized
Dec  9 04:42:58 np0005551604 kernel: ACPI: bus type USB registered
Dec  9 04:42:58 np0005551604 kernel: usbcore: registered new interface driver usbfs
Dec  9 04:42:58 np0005551604 kernel: usbcore: registered new interface driver hub
Dec  9 04:42:58 np0005551604 kernel: usbcore: registered new device driver usb
Dec  9 04:42:58 np0005551604 kernel: pps_core: LinuxPPS API ver. 1 registered
Dec  9 04:42:58 np0005551604 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Dec  9 04:42:58 np0005551604 kernel: PTP clock support registered
Dec  9 04:42:58 np0005551604 kernel: EDAC MC: Ver: 3.0.0
Dec  9 04:42:58 np0005551604 kernel: NetLabel: Initializing
Dec  9 04:42:58 np0005551604 kernel: NetLabel:  domain hash size = 128
Dec  9 04:42:58 np0005551604 kernel: NetLabel:  protocols = UNLABELED CIPSOv4 CALIPSO
Dec  9 04:42:58 np0005551604 kernel: NetLabel:  unlabeled traffic allowed by default
Dec  9 04:42:58 np0005551604 kernel: PCI: Using ACPI for IRQ routing
Dec  9 04:42:58 np0005551604 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Dec  9 04:42:58 np0005551604 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Dec  9 04:42:58 np0005551604 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Dec  9 04:42:58 np0005551604 kernel: vgaarb: loaded
Dec  9 04:42:58 np0005551604 kernel: clocksource: Switched to clocksource kvm-clock
Dec  9 04:42:58 np0005551604 kernel: VFS: Disk quotas dquot_6.6.0
Dec  9 04:42:58 np0005551604 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec  9 04:42:58 np0005551604 kernel: pnp: PnP ACPI init
Dec  9 04:42:58 np0005551604 kernel: pnp: PnP ACPI: found 5 devices
Dec  9 04:42:58 np0005551604 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Dec  9 04:42:58 np0005551604 kernel: NET: Registered PF_INET protocol family
Dec  9 04:42:58 np0005551604 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Dec  9 04:42:58 np0005551604 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Dec  9 04:42:58 np0005551604 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec  9 04:42:58 np0005551604 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Dec  9 04:42:58 np0005551604 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Dec  9 04:42:58 np0005551604 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Dec  9 04:42:58 np0005551604 kernel: MPTCP token hash table entries: 8192 (order: 5, 196608 bytes, linear)
Dec  9 04:42:58 np0005551604 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Dec  9 04:42:58 np0005551604 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Dec  9 04:42:58 np0005551604 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec  9 04:42:58 np0005551604 kernel: NET: Registered PF_XDP protocol family
Dec  9 04:42:58 np0005551604 kernel: pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Dec  9 04:42:58 np0005551604 kernel: pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Dec  9 04:42:58 np0005551604 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Dec  9 04:42:58 np0005551604 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Dec  9 04:42:58 np0005551604 kernel: pci_bus 0000:00: resource 8 [mem 0x240000000-0x2bfffffff window]
Dec  9 04:42:58 np0005551604 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Dec  9 04:42:58 np0005551604 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Dec  9 04:42:58 np0005551604 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Dec  9 04:42:58 np0005551604 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x160 took 73280 usecs
Dec  9 04:42:58 np0005551604 kernel: PCI: CLS 0 bytes, default 64
Dec  9 04:42:58 np0005551604 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Dec  9 04:42:58 np0005551604 kernel: software IO TLB: mapped [mem 0x00000000ab000000-0x00000000af000000] (64MB)
Dec  9 04:42:58 np0005551604 kernel: ACPI: bus type thunderbolt registered
Dec  9 04:42:58 np0005551604 kernel: Trying to unpack rootfs image as initramfs...
Dec  9 04:42:58 np0005551604 kernel: Initialise system trusted keyrings
Dec  9 04:42:58 np0005551604 kernel: Key type blacklist registered
Dec  9 04:42:58 np0005551604 kernel: workingset: timestamp_bits=36 max_order=21 bucket_order=0
Dec  9 04:42:58 np0005551604 kernel: zbud: loaded
Dec  9 04:42:58 np0005551604 kernel: integrity: Platform Keyring initialized
Dec  9 04:42:58 np0005551604 kernel: integrity: Machine keyring initialized
Dec  9 04:42:58 np0005551604 kernel: Freeing initrd memory: 77112K
Dec  9 04:42:58 np0005551604 kernel: NET: Registered PF_ALG protocol family
Dec  9 04:42:58 np0005551604 kernel: xor: automatically using best checksumming function   avx
Dec  9 04:42:58 np0005551604 kernel: Key type asymmetric registered
Dec  9 04:42:58 np0005551604 kernel: Asymmetric key parser 'x509' registered
Dec  9 04:42:58 np0005551604 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 246)
Dec  9 04:42:58 np0005551604 kernel: io scheduler mq-deadline registered
Dec  9 04:42:58 np0005551604 kernel: io scheduler kyber registered
Dec  9 04:42:58 np0005551604 kernel: io scheduler bfq registered
Dec  9 04:42:58 np0005551604 kernel: atomic64_test: passed for x86-64 platform with CX8 and with SSE
Dec  9 04:42:58 np0005551604 kernel: shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
Dec  9 04:42:58 np0005551604 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
Dec  9 04:42:58 np0005551604 kernel: ACPI: button: Power Button [PWRF]
Dec  9 04:42:58 np0005551604 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Dec  9 04:42:58 np0005551604 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Dec  9 04:42:58 np0005551604 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Dec  9 04:42:58 np0005551604 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec  9 04:42:58 np0005551604 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Dec  9 04:42:58 np0005551604 kernel: Non-volatile memory driver v1.3
Dec  9 04:42:58 np0005551604 kernel: rdac: device handler registered
Dec  9 04:42:58 np0005551604 kernel: hp_sw: device handler registered
Dec  9 04:42:58 np0005551604 kernel: emc: device handler registered
Dec  9 04:42:58 np0005551604 kernel: alua: device handler registered
Dec  9 04:42:58 np0005551604 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Dec  9 04:42:58 np0005551604 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Dec  9 04:42:58 np0005551604 kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Dec  9 04:42:58 np0005551604 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c100
Dec  9 04:42:58 np0005551604 kernel: usb usb1: New USB device found, idVendor=1d6b, idProduct=0001, bcdDevice= 5.14
Dec  9 04:42:58 np0005551604 kernel: usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Dec  9 04:42:58 np0005551604 kernel: usb usb1: Product: UHCI Host Controller
Dec  9 04:42:58 np0005551604 kernel: usb usb1: Manufacturer: Linux 5.14.0-648.el9.x86_64 uhci_hcd
Dec  9 04:42:58 np0005551604 kernel: usb usb1: SerialNumber: 0000:00:01.2
Dec  9 04:42:58 np0005551604 kernel: hub 1-0:1.0: USB hub found
Dec  9 04:42:58 np0005551604 kernel: hub 1-0:1.0: 2 ports detected
Dec  9 04:42:58 np0005551604 kernel: usbcore: registered new interface driver usbserial_generic
Dec  9 04:42:58 np0005551604 kernel: usbserial: USB Serial support registered for generic
Dec  9 04:42:58 np0005551604 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Dec  9 04:42:58 np0005551604 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Dec  9 04:42:58 np0005551604 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Dec  9 04:42:58 np0005551604 kernel: mousedev: PS/2 mouse device common for all mice
Dec  9 04:42:58 np0005551604 kernel: rtc_cmos 00:04: RTC can wake from S4
Dec  9 04:42:58 np0005551604 kernel: rtc_cmos 00:04: registered as rtc0
Dec  9 04:42:58 np0005551604 kernel: rtc_cmos 00:04: setting system clock to 2025-12-09T09:42:57 UTC (1765273377)
Dec  9 04:42:58 np0005551604 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Dec  9 04:42:58 np0005551604 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Dec  9 04:42:58 np0005551604 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Dec  9 04:42:58 np0005551604 kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input4
Dec  9 04:42:58 np0005551604 kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input3
Dec  9 04:42:58 np0005551604 kernel: hid: raw HID events driver (C) Jiri Kosina
Dec  9 04:42:58 np0005551604 kernel: usbcore: registered new interface driver usbhid
Dec  9 04:42:58 np0005551604 kernel: usbhid: USB HID core driver
Dec  9 04:42:58 np0005551604 kernel: drop_monitor: Initializing network drop monitor service
Dec  9 04:42:58 np0005551604 kernel: Initializing XFRM netlink socket
Dec  9 04:42:58 np0005551604 kernel: NET: Registered PF_INET6 protocol family
Dec  9 04:42:58 np0005551604 kernel: Segment Routing with IPv6
Dec  9 04:42:58 np0005551604 kernel: NET: Registered PF_PACKET protocol family
Dec  9 04:42:58 np0005551604 kernel: mpls_gso: MPLS GSO support
Dec  9 04:42:58 np0005551604 kernel: IPI shorthand broadcast: enabled
Dec  9 04:42:58 np0005551604 kernel: AVX2 version of gcm_enc/dec engaged.
Dec  9 04:42:58 np0005551604 kernel: AES CTR mode by8 optimization enabled
Dec  9 04:42:58 np0005551604 kernel: sched_clock: Marking stable (2391005545, 153778355)->(2859927131, -315143231)
Dec  9 04:42:58 np0005551604 kernel: registered taskstats version 1
Dec  9 04:42:58 np0005551604 kernel: Loading compiled-in X.509 certificates
Dec  9 04:42:58 np0005551604 kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: bcc7fcdcfd9be61e8634554e9f7a1c01f32489d8'
Dec  9 04:42:58 np0005551604 kernel: Loaded X.509 cert 'Red Hat Enterprise Linux Driver Update Program (key 3): bf57f3e87362bc7229d9f465321773dfd1f77a80'
Dec  9 04:42:58 np0005551604 kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kpatch signing key: 4d38fd864ebe18c5f0b72e3852e2014c3a676fc8'
Dec  9 04:42:58 np0005551604 kernel: Loaded X.509 cert 'RH-IMA-CA: Red Hat IMA CA: fb31825dd0e073685b264e3038963673f753959a'
Dec  9 04:42:58 np0005551604 kernel: Loaded X.509 cert 'Nvidia GPU OOT signing 001: 55e1cef88193e60419f0b0ec379c49f77545acf0'
Dec  9 04:42:58 np0005551604 kernel: Demotion targets for Node 0: null
Dec  9 04:42:58 np0005551604 kernel: page_owner is disabled
Dec  9 04:42:58 np0005551604 kernel: Key type .fscrypt registered
Dec  9 04:42:58 np0005551604 kernel: Key type fscrypt-provisioning registered
Dec  9 04:42:58 np0005551604 kernel: Key type big_key registered
Dec  9 04:42:58 np0005551604 kernel: Key type encrypted registered
Dec  9 04:42:58 np0005551604 kernel: ima: No TPM chip found, activating TPM-bypass!
Dec  9 04:42:58 np0005551604 kernel: Loading compiled-in module X.509 certificates
Dec  9 04:42:58 np0005551604 kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: bcc7fcdcfd9be61e8634554e9f7a1c01f32489d8'
Dec  9 04:42:58 np0005551604 kernel: ima: Allocated hash algorithm: sha256
Dec  9 04:42:58 np0005551604 kernel: ima: No architecture policies found
Dec  9 04:42:58 np0005551604 kernel: evm: Initialising EVM extended attributes:
Dec  9 04:42:58 np0005551604 kernel: evm: security.selinux
Dec  9 04:42:58 np0005551604 kernel: evm: security.SMACK64 (disabled)
Dec  9 04:42:58 np0005551604 kernel: evm: security.SMACK64EXEC (disabled)
Dec  9 04:42:58 np0005551604 kernel: evm: security.SMACK64TRANSMUTE (disabled)
Dec  9 04:42:58 np0005551604 kernel: evm: security.SMACK64MMAP (disabled)
Dec  9 04:42:58 np0005551604 kernel: evm: security.apparmor (disabled)
Dec  9 04:42:58 np0005551604 kernel: evm: security.ima
Dec  9 04:42:58 np0005551604 kernel: evm: security.capability
Dec  9 04:42:58 np0005551604 kernel: evm: HMAC attrs: 0x1
Dec  9 04:42:58 np0005551604 kernel: usb 1-1: new full-speed USB device number 2 using uhci_hcd
Dec  9 04:42:58 np0005551604 kernel: Running certificate verification RSA selftest
Dec  9 04:42:58 np0005551604 kernel: Loaded X.509 cert 'Certificate verification self-testing key: f58703bb33ce1b73ee02eccdee5b8817518fe3db'
Dec  9 04:42:58 np0005551604 kernel: Running certificate verification ECDSA selftest
Dec  9 04:42:58 np0005551604 kernel: Loaded X.509 cert 'Certificate verification ECDSA self-testing key: 2900bcea1deb7bc8479a84a23d758efdfdd2b2d3'
Dec  9 04:42:58 np0005551604 kernel: clk: Disabling unused clocks
Dec  9 04:42:58 np0005551604 kernel: Freeing unused decrypted memory: 2028K
Dec  9 04:42:58 np0005551604 kernel: Freeing unused kernel image (initmem) memory: 4192K
Dec  9 04:42:58 np0005551604 kernel: Write protecting the kernel read-only data: 30720k
Dec  9 04:42:58 np0005551604 kernel: Freeing unused kernel image (rodata/data gap) memory: 420K
Dec  9 04:42:58 np0005551604 kernel: usb 1-1: New USB device found, idVendor=0627, idProduct=0001, bcdDevice= 0.00
Dec  9 04:42:58 np0005551604 kernel: usb 1-1: New USB device strings: Mfr=1, Product=3, SerialNumber=10
Dec  9 04:42:58 np0005551604 kernel: usb 1-1: Product: QEMU USB Tablet
Dec  9 04:42:58 np0005551604 kernel: usb 1-1: Manufacturer: QEMU
Dec  9 04:42:58 np0005551604 kernel: usb 1-1: SerialNumber: 28754-0000:00:01.2-1
Dec  9 04:42:58 np0005551604 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:01.2/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input5
Dec  9 04:42:58 np0005551604 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:00:01.2-1/input0
Dec  9 04:42:58 np0005551604 kernel: x86/mm: Checked W+X mappings: passed, no W+X pages found.
Dec  9 04:42:58 np0005551604 kernel: Run /init as init process
Dec  9 04:42:58 np0005551604 systemd: systemd 252-59.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Dec  9 04:42:58 np0005551604 systemd: Detected virtualization kvm.
Dec  9 04:42:58 np0005551604 systemd: Detected architecture x86-64.
Dec  9 04:42:58 np0005551604 systemd: Running in initrd.
Dec  9 04:42:58 np0005551604 systemd: No hostname configured, using default hostname.
Dec  9 04:42:58 np0005551604 systemd: Hostname set to <localhost>.
Dec  9 04:42:58 np0005551604 systemd: Initializing machine ID from VM UUID.
Dec  9 04:42:58 np0005551604 systemd: Queued start job for default target Initrd Default Target.
Dec  9 04:42:58 np0005551604 systemd: Started Dispatch Password Requests to Console Directory Watch.
Dec  9 04:42:58 np0005551604 systemd: Reached target Local Encrypted Volumes.
Dec  9 04:42:58 np0005551604 systemd: Reached target Initrd /usr File System.
Dec  9 04:42:58 np0005551604 systemd: Reached target Local File Systems.
Dec  9 04:42:58 np0005551604 systemd: Reached target Path Units.
Dec  9 04:42:58 np0005551604 systemd: Reached target Slice Units.
Dec  9 04:42:58 np0005551604 systemd: Reached target Swaps.
Dec  9 04:42:58 np0005551604 systemd: Reached target Timer Units.
Dec  9 04:42:58 np0005551604 systemd: Listening on D-Bus System Message Bus Socket.
Dec  9 04:42:58 np0005551604 systemd: Listening on Journal Socket (/dev/log).
Dec  9 04:42:58 np0005551604 systemd: Listening on Journal Socket.
Dec  9 04:42:58 np0005551604 systemd: Listening on udev Control Socket.
Dec  9 04:42:58 np0005551604 systemd: Listening on udev Kernel Socket.
Dec  9 04:42:58 np0005551604 systemd: Reached target Socket Units.
Dec  9 04:42:58 np0005551604 systemd: Starting Create List of Static Device Nodes...
Dec  9 04:42:58 np0005551604 systemd: Starting Journal Service...
Dec  9 04:42:58 np0005551604 systemd: Load Kernel Modules was skipped because no trigger condition checks were met.
Dec  9 04:42:58 np0005551604 systemd: Starting Apply Kernel Variables...
Dec  9 04:42:58 np0005551604 systemd: Starting Create System Users...
Dec  9 04:42:58 np0005551604 systemd: Starting Setup Virtual Console...
Dec  9 04:42:58 np0005551604 systemd: Finished Create List of Static Device Nodes.
Dec  9 04:42:58 np0005551604 systemd: Finished Apply Kernel Variables.
Dec  9 04:42:58 np0005551604 systemd: Finished Create System Users.
Dec  9 04:42:58 np0005551604 systemd-journald[306]: Journal started
Dec  9 04:42:58 np0005551604 systemd-journald[306]: Runtime Journal (/run/log/journal/6aaf51230bdb461d92bbb40c4bea282b) is 8.0M, max 153.6M, 145.6M free.
Dec  9 04:42:58 np0005551604 systemd-sysusers[310]: Creating group 'users' with GID 100.
Dec  9 04:42:58 np0005551604 systemd-sysusers[310]: Creating group 'dbus' with GID 81.
Dec  9 04:42:58 np0005551604 systemd-sysusers[310]: Creating user 'dbus' (System Message Bus) with UID 81 and GID 81.
Dec  9 04:42:58 np0005551604 systemd: Started Journal Service.
Dec  9 04:42:58 np0005551604 systemd[1]: Starting Create Static Device Nodes in /dev...
Dec  9 04:42:58 np0005551604 systemd[1]: Starting Create Volatile Files and Directories...
Dec  9 04:42:58 np0005551604 systemd[1]: Finished Create Static Device Nodes in /dev.
Dec  9 04:42:58 np0005551604 systemd[1]: Finished Create Volatile Files and Directories.
Dec  9 04:42:58 np0005551604 systemd[1]: Finished Setup Virtual Console.
Dec  9 04:42:58 np0005551604 systemd[1]: dracut ask for additional cmdline parameters was skipped because no trigger condition checks were met.
Dec  9 04:42:58 np0005551604 systemd[1]: Starting dracut cmdline hook...
Dec  9 04:42:58 np0005551604 dracut-cmdline[326]: dracut-9 dracut-057-102.git20250818.el9
Dec  9 04:42:58 np0005551604 dracut-cmdline[326]: Using kernel command line parameters:    BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-648.el9.x86_64 root=UUID=fcf6b761-831a-48a7-9f5f-068b5063763f ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Dec  9 04:42:58 np0005551604 systemd[1]: Finished dracut cmdline hook.
Dec  9 04:42:58 np0005551604 systemd[1]: Starting dracut pre-udev hook...
Dec  9 04:42:58 np0005551604 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Dec  9 04:42:58 np0005551604 kernel: device-mapper: uevent: version 1.0.3
Dec  9 04:42:58 np0005551604 kernel: device-mapper: ioctl: 4.50.0-ioctl (2025-04-28) initialised: dm-devel@lists.linux.dev
Dec  9 04:42:58 np0005551604 kernel: RPC: Registered named UNIX socket transport module.
Dec  9 04:42:58 np0005551604 kernel: RPC: Registered udp transport module.
Dec  9 04:42:58 np0005551604 kernel: RPC: Registered tcp transport module.
Dec  9 04:42:58 np0005551604 kernel: RPC: Registered tcp-with-tls transport module.
Dec  9 04:42:58 np0005551604 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Dec  9 04:42:59 np0005551604 rpc.statd[443]: Version 2.5.4 starting
Dec  9 04:42:59 np0005551604 rpc.statd[443]: Initializing NSM state
Dec  9 04:42:59 np0005551604 rpc.idmapd[448]: Setting log level to 0
Dec  9 04:42:59 np0005551604 systemd[1]: Finished dracut pre-udev hook.
Dec  9 04:42:59 np0005551604 systemd[1]: Starting Rule-based Manager for Device Events and Files...
Dec  9 04:42:59 np0005551604 systemd-udevd[461]: Using default interface naming scheme 'rhel-9.0'.
Dec  9 04:42:59 np0005551604 systemd[1]: Started Rule-based Manager for Device Events and Files.
Dec  9 04:42:59 np0005551604 systemd[1]: Starting dracut pre-trigger hook...
Dec  9 04:42:59 np0005551604 systemd[1]: Finished dracut pre-trigger hook.
Dec  9 04:42:59 np0005551604 systemd[1]: Starting Coldplug All udev Devices...
Dec  9 04:42:59 np0005551604 systemd[1]: Created slice Slice /system/modprobe.
Dec  9 04:42:59 np0005551604 systemd[1]: Starting Load Kernel Module configfs...
Dec  9 04:42:59 np0005551604 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec  9 04:42:59 np0005551604 systemd[1]: Finished Load Kernel Module configfs.
Dec  9 04:42:59 np0005551604 systemd[1]: Finished Coldplug All udev Devices.
Dec  9 04:42:59 np0005551604 systemd[1]: Mounting Kernel Configuration File System...
Dec  9 04:42:59 np0005551604 systemd[1]: nm-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Dec  9 04:42:59 np0005551604 systemd[1]: Reached target Network.
Dec  9 04:42:59 np0005551604 systemd[1]: nm-wait-online-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Dec  9 04:42:59 np0005551604 systemd[1]: Starting dracut initqueue hook...
Dec  9 04:42:59 np0005551604 systemd[1]: Mounted Kernel Configuration File System.
Dec  9 04:42:59 np0005551604 systemd[1]: Reached target System Initialization.
Dec  9 04:42:59 np0005551604 systemd[1]: Reached target Basic System.
Dec  9 04:42:59 np0005551604 kernel: virtio_blk virtio2: 8/0/0 default/read/poll queues
Dec  9 04:42:59 np0005551604 systemd-udevd[474]: Network interface NamePolicy= disabled on kernel command line.
Dec  9 04:42:59 np0005551604 kernel: scsi host0: ata_piix
Dec  9 04:42:59 np0005551604 kernel: scsi host1: ata_piix
Dec  9 04:42:59 np0005551604 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc140 irq 14 lpm-pol 0
Dec  9 04:42:59 np0005551604 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc148 irq 15 lpm-pol 0
Dec  9 04:42:59 np0005551604 kernel: virtio_blk virtio2: [vda] 167772160 512-byte logical blocks (85.9 GB/80.0 GiB)
Dec  9 04:42:59 np0005551604 kernel: vda: vda1
Dec  9 04:42:59 np0005551604 kernel: ata1: found unknown device (class 0)
Dec  9 04:42:59 np0005551604 kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Dec  9 04:42:59 np0005551604 kernel: scsi 0:0:0:0: CD-ROM            QEMU     QEMU DVD-ROM     2.5+ PQ: 0 ANSI: 5
Dec  9 04:42:59 np0005551604 kernel: scsi 0:0:0:0: Attached scsi generic sg0 type 5
Dec  9 04:42:59 np0005551604 systemd[1]: Found device /dev/disk/by-uuid/fcf6b761-831a-48a7-9f5f-068b5063763f.
Dec  9 04:42:59 np0005551604 systemd[1]: Reached target Initrd Root Device.
Dec  9 04:42:59 np0005551604 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Dec  9 04:42:59 np0005551604 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Dec  9 04:42:59 np0005551604 systemd[1]: Finished dracut initqueue hook.
Dec  9 04:42:59 np0005551604 systemd[1]: Reached target Preparation for Remote File Systems.
Dec  9 04:42:59 np0005551604 systemd[1]: Reached target Remote Encrypted Volumes.
Dec  9 04:42:59 np0005551604 systemd[1]: Reached target Remote File Systems.
Dec  9 04:42:59 np0005551604 systemd[1]: Starting dracut pre-mount hook...
Dec  9 04:42:59 np0005551604 systemd[1]: Finished dracut pre-mount hook.
Dec  9 04:42:59 np0005551604 systemd[1]: Starting File System Check on /dev/disk/by-uuid/fcf6b761-831a-48a7-9f5f-068b5063763f...
Dec  9 04:42:59 np0005551604 systemd-fsck[556]: /usr/sbin/fsck.xfs: XFS file system.
Dec  9 04:42:59 np0005551604 systemd[1]: Finished File System Check on /dev/disk/by-uuid/fcf6b761-831a-48a7-9f5f-068b5063763f.
Dec  9 04:42:59 np0005551604 systemd[1]: Mounting /sysroot...
Dec  9 04:43:00 np0005551604 kernel: SGI XFS with ACLs, security attributes, scrub, quota, no debug enabled
Dec  9 04:43:00 np0005551604 kernel: XFS (vda1): Mounting V5 Filesystem fcf6b761-831a-48a7-9f5f-068b5063763f
Dec  9 04:43:00 np0005551604 kernel: XFS (vda1): Ending clean mount
Dec  9 04:43:00 np0005551604 systemd[1]: Mounted /sysroot.
Dec  9 04:43:00 np0005551604 systemd[1]: Reached target Initrd Root File System.
Dec  9 04:43:00 np0005551604 systemd[1]: Starting Mountpoints Configured in the Real Root...
Dec  9 04:43:00 np0005551604 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Dec  9 04:43:00 np0005551604 systemd[1]: Finished Mountpoints Configured in the Real Root.
Dec  9 04:43:00 np0005551604 systemd[1]: Reached target Initrd File Systems.
Dec  9 04:43:00 np0005551604 systemd[1]: Reached target Initrd Default Target.
Dec  9 04:43:00 np0005551604 systemd[1]: Starting dracut mount hook...
Dec  9 04:43:00 np0005551604 systemd[1]: Finished dracut mount hook.
Dec  9 04:43:00 np0005551604 systemd[1]: Starting dracut pre-pivot and cleanup hook...
Dec  9 04:43:00 np0005551604 rpc.idmapd[448]: exiting on signal 15
Dec  9 04:43:00 np0005551604 systemd[1]: var-lib-nfs-rpc_pipefs.mount: Deactivated successfully.
Dec  9 04:43:00 np0005551604 systemd[1]: Finished dracut pre-pivot and cleanup hook.
Dec  9 04:43:00 np0005551604 systemd[1]: Starting Cleaning Up and Shutting Down Daemons...
Dec  9 04:43:00 np0005551604 systemd[1]: Stopped target Network.
Dec  9 04:43:00 np0005551604 systemd[1]: Stopped target Remote Encrypted Volumes.
Dec  9 04:43:00 np0005551604 systemd[1]: Stopped target Timer Units.
Dec  9 04:43:00 np0005551604 systemd[1]: dbus.socket: Deactivated successfully.
Dec  9 04:43:00 np0005551604 systemd[1]: Closed D-Bus System Message Bus Socket.
Dec  9 04:43:00 np0005551604 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Dec  9 04:43:00 np0005551604 systemd[1]: Stopped dracut pre-pivot and cleanup hook.
Dec  9 04:43:00 np0005551604 systemd[1]: Stopped target Initrd Default Target.
Dec  9 04:43:00 np0005551604 systemd[1]: Stopped target Basic System.
Dec  9 04:43:00 np0005551604 systemd[1]: Stopped target Initrd Root Device.
Dec  9 04:43:00 np0005551604 systemd[1]: Stopped target Initrd /usr File System.
Dec  9 04:43:00 np0005551604 systemd[1]: Stopped target Path Units.
Dec  9 04:43:00 np0005551604 systemd[1]: Stopped target Remote File Systems.
Dec  9 04:43:00 np0005551604 systemd[1]: Stopped target Preparation for Remote File Systems.
Dec  9 04:43:00 np0005551604 systemd[1]: Stopped target Slice Units.
Dec  9 04:43:00 np0005551604 systemd[1]: Stopped target Socket Units.
Dec  9 04:43:00 np0005551604 systemd[1]: Stopped target System Initialization.
Dec  9 04:43:00 np0005551604 systemd[1]: Stopped target Local File Systems.
Dec  9 04:43:00 np0005551604 systemd[1]: Stopped target Swaps.
Dec  9 04:43:00 np0005551604 systemd[1]: dracut-mount.service: Deactivated successfully.
Dec  9 04:43:00 np0005551604 systemd[1]: Stopped dracut mount hook.
Dec  9 04:43:00 np0005551604 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Dec  9 04:43:00 np0005551604 systemd[1]: Stopped dracut pre-mount hook.
Dec  9 04:43:00 np0005551604 systemd[1]: Stopped target Local Encrypted Volumes.
Dec  9 04:43:00 np0005551604 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Dec  9 04:43:00 np0005551604 systemd[1]: Stopped Dispatch Password Requests to Console Directory Watch.
Dec  9 04:43:00 np0005551604 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Dec  9 04:43:00 np0005551604 systemd[1]: Stopped dracut initqueue hook.
Dec  9 04:43:00 np0005551604 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec  9 04:43:00 np0005551604 systemd[1]: Stopped Apply Kernel Variables.
Dec  9 04:43:00 np0005551604 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Dec  9 04:43:00 np0005551604 systemd[1]: Stopped Create Volatile Files and Directories.
Dec  9 04:43:00 np0005551604 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Dec  9 04:43:00 np0005551604 systemd[1]: Stopped Coldplug All udev Devices.
Dec  9 04:43:00 np0005551604 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Dec  9 04:43:00 np0005551604 systemd[1]: Stopped dracut pre-trigger hook.
Dec  9 04:43:00 np0005551604 systemd[1]: Stopping Rule-based Manager for Device Events and Files...
Dec  9 04:43:00 np0005551604 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec  9 04:43:00 np0005551604 systemd[1]: Stopped Setup Virtual Console.
Dec  9 04:43:00 np0005551604 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Dec  9 04:43:00 np0005551604 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Dec  9 04:43:00 np0005551604 systemd[1]: systemd-udevd.service: Deactivated successfully.
Dec  9 04:43:00 np0005551604 systemd[1]: Stopped Rule-based Manager for Device Events and Files.
Dec  9 04:43:00 np0005551604 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Dec  9 04:43:00 np0005551604 systemd[1]: Closed udev Control Socket.
Dec  9 04:43:00 np0005551604 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Dec  9 04:43:00 np0005551604 systemd[1]: Closed udev Kernel Socket.
Dec  9 04:43:00 np0005551604 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Dec  9 04:43:00 np0005551604 systemd[1]: Stopped dracut pre-udev hook.
Dec  9 04:43:00 np0005551604 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Dec  9 04:43:00 np0005551604 systemd[1]: Stopped dracut cmdline hook.
Dec  9 04:43:00 np0005551604 systemd[1]: Starting Cleanup udev Database...
Dec  9 04:43:00 np0005551604 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Dec  9 04:43:00 np0005551604 systemd[1]: Stopped Create Static Device Nodes in /dev.
Dec  9 04:43:00 np0005551604 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Dec  9 04:43:00 np0005551604 systemd[1]: Stopped Create List of Static Device Nodes.
Dec  9 04:43:00 np0005551604 systemd[1]: systemd-sysusers.service: Deactivated successfully.
Dec  9 04:43:00 np0005551604 systemd[1]: Stopped Create System Users.
Dec  9 04:43:00 np0005551604 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Dec  9 04:43:00 np0005551604 systemd[1]: run-credentials-systemd\x2dsysusers.service.mount: Deactivated successfully.
Dec  9 04:43:00 np0005551604 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Dec  9 04:43:00 np0005551604 systemd[1]: Finished Cleaning Up and Shutting Down Daemons.
Dec  9 04:43:00 np0005551604 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Dec  9 04:43:00 np0005551604 systemd[1]: Finished Cleanup udev Database.
Dec  9 04:43:00 np0005551604 systemd[1]: Reached target Switch Root.
Dec  9 04:43:00 np0005551604 systemd[1]: Starting Switch Root...
Dec  9 04:43:00 np0005551604 systemd[1]: Switching root.
Dec  9 04:43:00 np0005551604 systemd-journald[306]: Received SIGTERM from PID 1 (systemd).
Dec  9 04:43:00 np0005551604 systemd-journald[306]: Journal stopped
Dec  9 04:43:01 np0005551604 kernel: audit: type=1404 audit(1765273381.005:2): enforcing=1 old_enforcing=0 auid=4294967295 ses=4294967295 enabled=1 old-enabled=1 lsm=selinux res=1
Dec  9 04:43:01 np0005551604 kernel: SELinux:  policy capability network_peer_controls=1
Dec  9 04:43:01 np0005551604 kernel: SELinux:  policy capability open_perms=1
Dec  9 04:43:01 np0005551604 kernel: SELinux:  policy capability extended_socket_class=1
Dec  9 04:43:01 np0005551604 kernel: SELinux:  policy capability always_check_network=0
Dec  9 04:43:01 np0005551604 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec  9 04:43:01 np0005551604 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec  9 04:43:01 np0005551604 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec  9 04:43:01 np0005551604 kernel: audit: type=1403 audit(1765273381.130:3): auid=4294967295 ses=4294967295 lsm=selinux res=1
Dec  9 04:43:01 np0005551604 systemd: Successfully loaded SELinux policy in 127.382ms.
Dec  9 04:43:01 np0005551604 systemd: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 28.873ms.
Dec  9 04:43:01 np0005551604 systemd: systemd 252-59.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Dec  9 04:43:01 np0005551604 systemd: Detected virtualization kvm.
Dec  9 04:43:01 np0005551604 systemd: Detected architecture x86-64.
Dec  9 04:43:01 np0005551604 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  9 04:43:01 np0005551604 systemd: initrd-switch-root.service: Deactivated successfully.
Dec  9 04:43:01 np0005551604 systemd: Stopped Switch Root.
Dec  9 04:43:01 np0005551604 systemd: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Dec  9 04:43:01 np0005551604 systemd: Created slice Slice /system/getty.
Dec  9 04:43:01 np0005551604 systemd: Created slice Slice /system/serial-getty.
Dec  9 04:43:01 np0005551604 systemd: Created slice Slice /system/sshd-keygen.
Dec  9 04:43:01 np0005551604 systemd: Created slice User and Session Slice.
Dec  9 04:43:01 np0005551604 systemd: Started Dispatch Password Requests to Console Directory Watch.
Dec  9 04:43:01 np0005551604 systemd: Started Forward Password Requests to Wall Directory Watch.
Dec  9 04:43:01 np0005551604 systemd: Set up automount Arbitrary Executable File Formats File System Automount Point.
Dec  9 04:43:01 np0005551604 systemd: Reached target Local Encrypted Volumes.
Dec  9 04:43:01 np0005551604 systemd: Stopped target Switch Root.
Dec  9 04:43:01 np0005551604 systemd: Stopped target Initrd File Systems.
Dec  9 04:43:01 np0005551604 systemd: Stopped target Initrd Root File System.
Dec  9 04:43:01 np0005551604 systemd: Reached target Local Integrity Protected Volumes.
Dec  9 04:43:01 np0005551604 systemd: Reached target Path Units.
Dec  9 04:43:01 np0005551604 systemd: Reached target rpc_pipefs.target.
Dec  9 04:43:01 np0005551604 systemd: Reached target Slice Units.
Dec  9 04:43:01 np0005551604 systemd: Reached target Swaps.
Dec  9 04:43:01 np0005551604 systemd: Reached target Local Verity Protected Volumes.
Dec  9 04:43:01 np0005551604 systemd: Listening on RPCbind Server Activation Socket.
Dec  9 04:43:01 np0005551604 systemd: Reached target RPC Port Mapper.
Dec  9 04:43:01 np0005551604 systemd: Listening on Process Core Dump Socket.
Dec  9 04:43:01 np0005551604 systemd: Listening on initctl Compatibility Named Pipe.
Dec  9 04:43:01 np0005551604 systemd: Listening on udev Control Socket.
Dec  9 04:43:01 np0005551604 systemd: Listening on udev Kernel Socket.
Dec  9 04:43:01 np0005551604 systemd: Mounting Huge Pages File System...
Dec  9 04:43:01 np0005551604 systemd: Mounting POSIX Message Queue File System...
Dec  9 04:43:01 np0005551604 systemd: Mounting Kernel Debug File System...
Dec  9 04:43:01 np0005551604 systemd: Mounting Kernel Trace File System...
Dec  9 04:43:01 np0005551604 systemd: Kernel Module supporting RPCSEC_GSS was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Dec  9 04:43:01 np0005551604 systemd: Starting Create List of Static Device Nodes...
Dec  9 04:43:01 np0005551604 systemd: Starting Load Kernel Module configfs...
Dec  9 04:43:01 np0005551604 systemd: Starting Load Kernel Module drm...
Dec  9 04:43:01 np0005551604 systemd: Starting Load Kernel Module efi_pstore...
Dec  9 04:43:01 np0005551604 systemd: Starting Load Kernel Module fuse...
Dec  9 04:43:01 np0005551604 systemd: Starting Read and set NIS domainname from /etc/sysconfig/network...
Dec  9 04:43:01 np0005551604 systemd: systemd-fsck-root.service: Deactivated successfully.
Dec  9 04:43:01 np0005551604 systemd: Stopped File System Check on Root Device.
Dec  9 04:43:01 np0005551604 systemd: Stopped Journal Service.
Dec  9 04:43:01 np0005551604 systemd: Starting Journal Service...
Dec  9 04:43:01 np0005551604 systemd: Load Kernel Modules was skipped because no trigger condition checks were met.
Dec  9 04:43:01 np0005551604 systemd: Starting Generate network units from Kernel command line...
Dec  9 04:43:01 np0005551604 systemd: TPM2 PCR Machine ID Measurement was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec  9 04:43:01 np0005551604 systemd: Starting Remount Root and Kernel File Systems...
Dec  9 04:43:01 np0005551604 systemd: Repartition Root Disk was skipped because no trigger condition checks were met.
Dec  9 04:43:01 np0005551604 systemd: Starting Apply Kernel Variables...
Dec  9 04:43:01 np0005551604 kernel: xfs filesystem being remounted at / supports timestamps until 2038 (0x7fffffff)
Dec  9 04:43:01 np0005551604 kernel: fuse: init (API version 7.37)
Dec  9 04:43:01 np0005551604 systemd: Starting Coldplug All udev Devices...
Dec  9 04:43:01 np0005551604 systemd-journald[677]: Journal started
Dec  9 04:43:01 np0005551604 systemd-journald[677]: Runtime Journal (/run/log/journal/4d4ef2323cc3337bbfd9081b2a323b4e) is 8.0M, max 153.6M, 145.6M free.
Dec  9 04:43:01 np0005551604 systemd[1]: Queued start job for default target Multi-User System.
Dec  9 04:43:01 np0005551604 systemd[1]: systemd-journald.service: Deactivated successfully.
Dec  9 04:43:01 np0005551604 systemd: Mounted Huge Pages File System.
Dec  9 04:43:01 np0005551604 systemd: Started Journal Service.
Dec  9 04:43:01 np0005551604 systemd[1]: Mounted POSIX Message Queue File System.
Dec  9 04:43:01 np0005551604 systemd[1]: Mounted Kernel Debug File System.
Dec  9 04:43:01 np0005551604 systemd[1]: Mounted Kernel Trace File System.
Dec  9 04:43:01 np0005551604 systemd[1]: Finished Create List of Static Device Nodes.
Dec  9 04:43:01 np0005551604 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec  9 04:43:01 np0005551604 systemd[1]: Finished Load Kernel Module configfs.
Dec  9 04:43:01 np0005551604 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec  9 04:43:01 np0005551604 systemd[1]: Finished Load Kernel Module efi_pstore.
Dec  9 04:43:01 np0005551604 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Dec  9 04:43:01 np0005551604 systemd[1]: Finished Load Kernel Module fuse.
Dec  9 04:43:01 np0005551604 systemd[1]: Finished Read and set NIS domainname from /etc/sysconfig/network.
Dec  9 04:43:01 np0005551604 systemd[1]: Finished Generate network units from Kernel command line.
Dec  9 04:43:01 np0005551604 systemd[1]: Finished Remount Root and Kernel File Systems.
Dec  9 04:43:01 np0005551604 systemd[1]: Finished Apply Kernel Variables.
Dec  9 04:43:01 np0005551604 kernel: ACPI: bus type drm_connector registered
Dec  9 04:43:01 np0005551604 systemd[1]: Mounting FUSE Control File System...
Dec  9 04:43:01 np0005551604 systemd[1]: First Boot Wizard was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Dec  9 04:43:01 np0005551604 systemd[1]: Starting Rebuild Hardware Database...
Dec  9 04:43:01 np0005551604 systemd[1]: Starting Flush Journal to Persistent Storage...
Dec  9 04:43:01 np0005551604 systemd[1]: Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec  9 04:43:01 np0005551604 systemd[1]: Starting Load/Save OS Random Seed...
Dec  9 04:43:01 np0005551604 systemd[1]: Starting Create System Users...
Dec  9 04:43:01 np0005551604 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec  9 04:43:01 np0005551604 systemd[1]: Finished Load Kernel Module drm.
Dec  9 04:43:01 np0005551604 systemd-journald[677]: Runtime Journal (/run/log/journal/4d4ef2323cc3337bbfd9081b2a323b4e) is 8.0M, max 153.6M, 145.6M free.
Dec  9 04:43:01 np0005551604 systemd-journald[677]: Received client request to flush runtime journal.
Dec  9 04:43:01 np0005551604 systemd[1]: Mounted FUSE Control File System.
Dec  9 04:43:01 np0005551604 systemd[1]: Finished Flush Journal to Persistent Storage.
Dec  9 04:43:01 np0005551604 systemd[1]: Finished Load/Save OS Random Seed.
Dec  9 04:43:01 np0005551604 systemd[1]: First Boot Complete was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Dec  9 04:43:01 np0005551604 systemd[1]: Finished Create System Users.
Dec  9 04:43:01 np0005551604 systemd[1]: Finished Coldplug All udev Devices.
Dec  9 04:43:01 np0005551604 systemd[1]: Starting Create Static Device Nodes in /dev...
Dec  9 04:43:01 np0005551604 systemd[1]: Finished Create Static Device Nodes in /dev.
Dec  9 04:43:01 np0005551604 systemd[1]: Reached target Preparation for Local File Systems.
Dec  9 04:43:01 np0005551604 systemd[1]: Reached target Local File Systems.
Dec  9 04:43:01 np0005551604 systemd[1]: Starting Rebuild Dynamic Linker Cache...
Dec  9 04:43:01 np0005551604 systemd[1]: Mark the need to relabel after reboot was skipped because of an unmet condition check (ConditionSecurity=!selinux).
Dec  9 04:43:01 np0005551604 systemd[1]: Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec  9 04:43:01 np0005551604 systemd[1]: Update Boot Loader Random Seed was skipped because no trigger condition checks were met.
Dec  9 04:43:01 np0005551604 systemd[1]: Starting Automatic Boot Loader Update...
Dec  9 04:43:01 np0005551604 systemd[1]: Commit a transient machine-id on disk was skipped because of an unmet condition check (ConditionPathIsMountPoint=/etc/machine-id).
Dec  9 04:43:01 np0005551604 systemd[1]: Starting Create Volatile Files and Directories...
Dec  9 04:43:01 np0005551604 bootctl[695]: Couldn't find EFI system partition, skipping.
Dec  9 04:43:01 np0005551604 systemd[1]: Finished Automatic Boot Loader Update.
Dec  9 04:43:01 np0005551604 systemd[1]: Finished Create Volatile Files and Directories.
Dec  9 04:43:01 np0005551604 systemd[1]: Starting Security Auditing Service...
Dec  9 04:43:01 np0005551604 systemd[1]: Starting RPC Bind...
Dec  9 04:43:01 np0005551604 systemd[1]: Starting Rebuild Journal Catalog...
Dec  9 04:43:01 np0005551604 auditd[700]: audit dispatcher initialized with q_depth=2000 and 1 active plugins
Dec  9 04:43:01 np0005551604 auditd[700]: Init complete, auditd 3.1.5 listening for events (startup state enable)
Dec  9 04:43:01 np0005551604 systemd[1]: Finished Rebuild Journal Catalog.
Dec  9 04:43:01 np0005551604 systemd[1]: Started RPC Bind.
Dec  9 04:43:02 np0005551604 augenrules[706]: /sbin/augenrules: No change
Dec  9 04:43:02 np0005551604 augenrules[721]: No rules
Dec  9 04:43:02 np0005551604 augenrules[721]: enabled 1
Dec  9 04:43:02 np0005551604 augenrules[721]: failure 1
Dec  9 04:43:02 np0005551604 augenrules[721]: pid 700
Dec  9 04:43:02 np0005551604 augenrules[721]: rate_limit 0
Dec  9 04:43:02 np0005551604 augenrules[721]: backlog_limit 8192
Dec  9 04:43:02 np0005551604 augenrules[721]: lost 0
Dec  9 04:43:02 np0005551604 augenrules[721]: backlog 0
Dec  9 04:43:02 np0005551604 augenrules[721]: backlog_wait_time 60000
Dec  9 04:43:02 np0005551604 augenrules[721]: backlog_wait_time_actual 0
Dec  9 04:43:02 np0005551604 augenrules[721]: enabled 1
Dec  9 04:43:02 np0005551604 augenrules[721]: failure 1
Dec  9 04:43:02 np0005551604 augenrules[721]: pid 700
Dec  9 04:43:02 np0005551604 augenrules[721]: rate_limit 0
Dec  9 04:43:02 np0005551604 augenrules[721]: backlog_limit 8192
Dec  9 04:43:02 np0005551604 augenrules[721]: lost 0
Dec  9 04:43:02 np0005551604 augenrules[721]: backlog 3
Dec  9 04:43:02 np0005551604 augenrules[721]: backlog_wait_time 60000
Dec  9 04:43:02 np0005551604 augenrules[721]: backlog_wait_time_actual 0
Dec  9 04:43:02 np0005551604 augenrules[721]: enabled 1
Dec  9 04:43:02 np0005551604 augenrules[721]: failure 1
Dec  9 04:43:02 np0005551604 augenrules[721]: pid 700
Dec  9 04:43:02 np0005551604 augenrules[721]: rate_limit 0
Dec  9 04:43:02 np0005551604 augenrules[721]: backlog_limit 8192
Dec  9 04:43:02 np0005551604 augenrules[721]: lost 0
Dec  9 04:43:02 np0005551604 augenrules[721]: backlog 0
Dec  9 04:43:02 np0005551604 augenrules[721]: backlog_wait_time 60000
Dec  9 04:43:02 np0005551604 augenrules[721]: backlog_wait_time_actual 0
Dec  9 04:43:02 np0005551604 systemd[1]: Started Security Auditing Service.
Dec  9 04:43:02 np0005551604 systemd[1]: Starting Record System Boot/Shutdown in UTMP...
Dec  9 04:43:02 np0005551604 systemd[1]: Finished Rebuild Dynamic Linker Cache.
Dec  9 04:43:02 np0005551604 systemd[1]: Finished Record System Boot/Shutdown in UTMP.
Dec  9 04:43:02 np0005551604 systemd[1]: Finished Rebuild Hardware Database.
Dec  9 04:43:02 np0005551604 systemd[1]: Starting Rule-based Manager for Device Events and Files...
Dec  9 04:43:02 np0005551604 systemd[1]: Starting Update is Completed...
Dec  9 04:43:02 np0005551604 systemd[1]: Finished Update is Completed.
Dec  9 04:43:02 np0005551604 systemd-udevd[729]: Using default interface naming scheme 'rhel-9.0'.
Dec  9 04:43:02 np0005551604 systemd[1]: Started Rule-based Manager for Device Events and Files.
Dec  9 04:43:02 np0005551604 systemd[1]: Reached target System Initialization.
Dec  9 04:43:02 np0005551604 systemd[1]: Started dnf makecache --timer.
Dec  9 04:43:02 np0005551604 systemd[1]: Started Daily rotation of log files.
Dec  9 04:43:02 np0005551604 systemd[1]: Started Daily Cleanup of Temporary Directories.
Dec  9 04:43:02 np0005551604 systemd[1]: Reached target Timer Units.
Dec  9 04:43:02 np0005551604 systemd[1]: Listening on D-Bus System Message Bus Socket.
Dec  9 04:43:02 np0005551604 systemd[1]: Listening on SSSD Kerberos Cache Manager responder socket.
Dec  9 04:43:02 np0005551604 systemd[1]: Reached target Socket Units.
Dec  9 04:43:02 np0005551604 systemd-udevd[738]: Network interface NamePolicy= disabled on kernel command line.
Dec  9 04:43:02 np0005551604 systemd[1]: Starting D-Bus System Message Bus...
Dec  9 04:43:02 np0005551604 systemd[1]: TPM2 PCR Barrier (Initialization) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec  9 04:43:02 np0005551604 systemd[1]: Condition check resulted in /dev/ttyS0 being skipped.
Dec  9 04:43:02 np0005551604 systemd[1]: Starting Load Kernel Module configfs...
Dec  9 04:43:02 np0005551604 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec  9 04:43:02 np0005551604 systemd[1]: Finished Load Kernel Module configfs.
Dec  9 04:43:02 np0005551604 systemd[1]: Started D-Bus System Message Bus.
Dec  9 04:43:02 np0005551604 systemd[1]: Reached target Basic System.
Dec  9 04:43:02 np0005551604 dbus-broker-lau[768]: Ready
Dec  9 04:43:02 np0005551604 systemd[1]: Starting NTP client/server...
Dec  9 04:43:02 np0005551604 systemd[1]: Starting Cloud-init: Local Stage (pre-network)...
Dec  9 04:43:02 np0005551604 systemd[1]: Starting Restore /run/initramfs on shutdown...
Dec  9 04:43:02 np0005551604 kernel: input: PC Speaker as /devices/platform/pcspkr/input/input6
Dec  9 04:43:02 np0005551604 chronyd[784]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Dec  9 04:43:02 np0005551604 chronyd[784]: Loaded 0 symmetric keys
Dec  9 04:43:02 np0005551604 chronyd[784]: Using right/UTC timezone to obtain leap second data
Dec  9 04:43:02 np0005551604 chronyd[784]: Loaded seccomp filter (level 2)
Dec  9 04:43:02 np0005551604 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Dec  9 04:43:02 np0005551604 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Dec  9 04:43:02 np0005551604 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Dec  9 04:43:02 np0005551604 systemd[1]: Starting IPv4 firewall with iptables...
Dec  9 04:43:02 np0005551604 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Dec  9 04:43:02 np0005551604 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Dec  9 04:43:02 np0005551604 kernel: kvm_amd: TSC scaling supported
Dec  9 04:43:02 np0005551604 kernel: kvm_amd: Nested Virtualization enabled
Dec  9 04:43:02 np0005551604 kernel: kvm_amd: Nested Paging enabled
Dec  9 04:43:02 np0005551604 kernel: kvm_amd: LBR virtualization supported
Dec  9 04:43:02 np0005551604 kernel: Console: switching to colour dummy device 80x25
Dec  9 04:43:02 np0005551604 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Dec  9 04:43:02 np0005551604 kernel: [drm] features: -context_init
Dec  9 04:43:02 np0005551604 kernel: [drm] number of scanouts: 1
Dec  9 04:43:02 np0005551604 kernel: [drm] number of cap sets: 0
Dec  9 04:43:02 np0005551604 kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0
Dec  9 04:43:02 np0005551604 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Dec  9 04:43:02 np0005551604 kernel: Console: switching to colour frame buffer device 128x48
Dec  9 04:43:02 np0005551604 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Dec  9 04:43:02 np0005551604 systemd[1]: Started irqbalance daemon.
Dec  9 04:43:02 np0005551604 systemd[1]: Load CPU microcode update was skipped because of an unmet condition check (ConditionPathExists=/sys/devices/system/cpu/microcode/reload).
Dec  9 04:43:02 np0005551604 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec  9 04:43:02 np0005551604 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec  9 04:43:02 np0005551604 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec  9 04:43:02 np0005551604 systemd[1]: Reached target sshd-keygen.target.
Dec  9 04:43:02 np0005551604 systemd[1]: System Security Services Daemon was skipped because no trigger condition checks were met.
Dec  9 04:43:02 np0005551604 systemd[1]: Reached target User and Group Name Lookups.
Dec  9 04:43:02 np0005551604 systemd[1]: Starting User Login Management...
Dec  9 04:43:02 np0005551604 systemd[1]: Started NTP client/server.
Dec  9 04:43:02 np0005551604 systemd[1]: Finished Restore /run/initramfs on shutdown.
Dec  9 04:43:02 np0005551604 kernel: Warning: Deprecated Driver is detected: nft_compat will not be maintained in a future major release and may be disabled
Dec  9 04:43:02 np0005551604 kernel: Warning: Deprecated Driver is detected: nft_compat_module_init will not be maintained in a future major release and may be disabled
Dec  9 04:43:02 np0005551604 systemd-logind[806]: New seat seat0.
Dec  9 04:43:02 np0005551604 systemd-logind[806]: Watching system buttons on /dev/input/event0 (Power Button)
Dec  9 04:43:02 np0005551604 systemd-logind[806]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Dec  9 04:43:02 np0005551604 systemd[1]: Started User Login Management.
Dec  9 04:43:02 np0005551604 iptables.init[785]: iptables: Applying firewall rules: [  OK  ]
Dec  9 04:43:02 np0005551604 systemd[1]: Finished IPv4 firewall with iptables.
Dec  9 04:43:03 np0005551604 cloud-init[837]: Cloud-init v. 24.4-7.el9 running 'init-local' at Tue, 09 Dec 2025 09:43:03 +0000. Up 7.91 seconds.
Dec  9 04:43:03 np0005551604 systemd[1]: run-cloud\x2dinit-tmp-tmp0n831rvl.mount: Deactivated successfully.
Dec  9 04:43:03 np0005551604 systemd[1]: Starting Hostname Service...
Dec  9 04:43:03 np0005551604 systemd[1]: Started Hostname Service.
Dec  9 04:43:03 np0005551604 systemd-hostnamed[851]: Hostname set to <np0005551604.novalocal> (static)
Dec  9 04:43:03 np0005551604 systemd[1]: Finished Cloud-init: Local Stage (pre-network).
Dec  9 04:43:03 np0005551604 systemd[1]: Reached target Preparation for Network.
Dec  9 04:43:03 np0005551604 systemd[1]: Starting Network Manager...
Dec  9 04:43:03 np0005551604 NetworkManager[856]: <info>  [1765273383.6115] NetworkManager (version 1.54.2-1.el9) is starting... (boot:f43569a1-1096-4e67-91b2-bda287c55398)
Dec  9 04:43:03 np0005551604 NetworkManager[856]: <info>  [1765273383.6118] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Dec  9 04:43:03 np0005551604 NetworkManager[856]: <info>  [1765273383.6175] manager[0x5653cee44000]: monitoring kernel firmware directory '/lib/firmware'.
Dec  9 04:43:03 np0005551604 NetworkManager[856]: <info>  [1765273383.6219] hostname: hostname: using hostnamed
Dec  9 04:43:03 np0005551604 NetworkManager[856]: <info>  [1765273383.6219] hostname: static hostname changed from (none) to "np0005551604.novalocal"
Dec  9 04:43:03 np0005551604 NetworkManager[856]: <info>  [1765273383.6223] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Dec  9 04:43:03 np0005551604 NetworkManager[856]: <info>  [1765273383.6320] manager[0x5653cee44000]: rfkill: Wi-Fi hardware radio set enabled
Dec  9 04:43:03 np0005551604 NetworkManager[856]: <info>  [1765273383.6320] manager[0x5653cee44000]: rfkill: WWAN hardware radio set enabled
Dec  9 04:43:03 np0005551604 NetworkManager[856]: <info>  [1765273383.6351] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.2-1.el9/libnm-device-plugin-team.so)
Dec  9 04:43:03 np0005551604 NetworkManager[856]: <info>  [1765273383.6351] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Dec  9 04:43:03 np0005551604 NetworkManager[856]: <info>  [1765273383.6352] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Dec  9 04:43:03 np0005551604 NetworkManager[856]: <info>  [1765273383.6352] manager: Networking is enabled by state file
Dec  9 04:43:03 np0005551604 NetworkManager[856]: <info>  [1765273383.6354] settings: Loaded settings plugin: keyfile (internal)
Dec  9 04:43:03 np0005551604 NetworkManager[856]: <info>  [1765273383.6362] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.2-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Dec  9 04:43:03 np0005551604 NetworkManager[856]: <info>  [1765273383.6379] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Dec  9 04:43:03 np0005551604 NetworkManager[856]: <info>  [1765273383.6389] dhcp: init: Using DHCP client 'internal'
Dec  9 04:43:03 np0005551604 NetworkManager[856]: <info>  [1765273383.6391] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Dec  9 04:43:03 np0005551604 NetworkManager[856]: <info>  [1765273383.6401] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  9 04:43:03 np0005551604 NetworkManager[856]: <info>  [1765273383.6407] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Dec  9 04:43:03 np0005551604 systemd[1]: Listening on Load/Save RF Kill Switch Status /dev/rfkill Watch.
Dec  9 04:43:03 np0005551604 NetworkManager[856]: <info>  [1765273383.6414] device (lo): Activation: starting connection 'lo' (4d2460cc-3851-4697-811d-bb6085f75db6)
Dec  9 04:43:03 np0005551604 NetworkManager[856]: <info>  [1765273383.6421] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Dec  9 04:43:03 np0005551604 NetworkManager[856]: <info>  [1765273383.6423] device (eth0): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  9 04:43:03 np0005551604 NetworkManager[856]: <info>  [1765273383.6446] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Dec  9 04:43:03 np0005551604 NetworkManager[856]: <info>  [1765273383.6449] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Dec  9 04:43:03 np0005551604 NetworkManager[856]: <info>  [1765273383.6451] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Dec  9 04:43:03 np0005551604 NetworkManager[856]: <info>  [1765273383.6453] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Dec  9 04:43:03 np0005551604 NetworkManager[856]: <info>  [1765273383.6455] device (eth0): carrier: link connected
Dec  9 04:43:03 np0005551604 NetworkManager[856]: <info>  [1765273383.6458] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Dec  9 04:43:03 np0005551604 NetworkManager[856]: <info>  [1765273383.6464] device (eth0): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Dec  9 04:43:03 np0005551604 NetworkManager[856]: <info>  [1765273383.6469] policy: auto-activating connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Dec  9 04:43:03 np0005551604 NetworkManager[856]: <info>  [1765273383.6475] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Dec  9 04:43:03 np0005551604 NetworkManager[856]: <info>  [1765273383.6476] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  9 04:43:03 np0005551604 NetworkManager[856]: <info>  [1765273383.6478] manager: NetworkManager state is now CONNECTING
Dec  9 04:43:03 np0005551604 NetworkManager[856]: <info>  [1765273383.6480] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  9 04:43:03 np0005551604 NetworkManager[856]: <info>  [1765273383.6485] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  9 04:43:03 np0005551604 NetworkManager[856]: <info>  [1765273383.6487] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec  9 04:43:03 np0005551604 systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec  9 04:43:03 np0005551604 systemd[1]: Started Network Manager.
Dec  9 04:43:03 np0005551604 systemd[1]: Reached target Network.
Dec  9 04:43:03 np0005551604 systemd[1]: Starting Network Manager Wait Online...
Dec  9 04:43:03 np0005551604 systemd[1]: Starting GSSAPI Proxy Daemon...
Dec  9 04:43:03 np0005551604 systemd[1]: Started Network Manager Script Dispatcher Service.
Dec  9 04:43:03 np0005551604 NetworkManager[856]: <info>  [1765273383.6820] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Dec  9 04:43:03 np0005551604 NetworkManager[856]: <info>  [1765273383.6823] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Dec  9 04:43:03 np0005551604 NetworkManager[856]: <info>  [1765273383.6828] device (lo): Activation: successful, device activated.
Dec  9 04:43:03 np0005551604 systemd[1]: Started GSSAPI Proxy Daemon.
Dec  9 04:43:03 np0005551604 systemd[1]: RPC security service for NFS client and server was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Dec  9 04:43:03 np0005551604 systemd[1]: Reached target NFS client services.
Dec  9 04:43:03 np0005551604 systemd[1]: Reached target Preparation for Remote File Systems.
Dec  9 04:43:03 np0005551604 systemd[1]: Reached target Remote File Systems.
Dec  9 04:43:03 np0005551604 systemd[1]: TPM2 PCR Barrier (User) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec  9 04:43:05 np0005551604 NetworkManager[856]: <info>  [1765273385.1497] dhcp4 (eth0): state changed new lease, address=38.102.83.201
Dec  9 04:43:05 np0005551604 NetworkManager[856]: <info>  [1765273385.1510] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Dec  9 04:43:05 np0005551604 NetworkManager[856]: <info>  [1765273385.1533] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  9 04:43:05 np0005551604 NetworkManager[856]: <info>  [1765273385.1579] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  9 04:43:05 np0005551604 NetworkManager[856]: <info>  [1765273385.1581] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  9 04:43:05 np0005551604 NetworkManager[856]: <info>  [1765273385.1584] manager: NetworkManager state is now CONNECTED_SITE
Dec  9 04:43:05 np0005551604 NetworkManager[856]: <info>  [1765273385.1587] device (eth0): Activation: successful, device activated.
Dec  9 04:43:05 np0005551604 NetworkManager[856]: <info>  [1765273385.1592] manager: NetworkManager state is now CONNECTED_GLOBAL
Dec  9 04:43:05 np0005551604 NetworkManager[856]: <info>  [1765273385.1595] manager: startup complete
Dec  9 04:43:05 np0005551604 systemd[1]: Finished Network Manager Wait Online.
Dec  9 04:43:05 np0005551604 systemd[1]: Starting Cloud-init: Network Stage...
Dec  9 04:43:05 np0005551604 cloud-init[921]: Cloud-init v. 24.4-7.el9 running 'init' at Tue, 09 Dec 2025 09:43:05 +0000. Up 10.30 seconds.
Dec  9 04:43:05 np0005551604 cloud-init[921]: ci-info: +++++++++++++++++++++++++++++++++++++++Net device info+++++++++++++++++++++++++++++++++++++++
Dec  9 04:43:05 np0005551604 cloud-init[921]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Dec  9 04:43:05 np0005551604 cloud-init[921]: ci-info: | Device |  Up  |           Address            |      Mask     | Scope  |     Hw-Address    |
Dec  9 04:43:05 np0005551604 cloud-init[921]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Dec  9 04:43:05 np0005551604 cloud-init[921]: ci-info: |  eth0  | True |        38.102.83.201         | 255.255.255.0 | global | fa:16:3e:10:2e:e6 |
Dec  9 04:43:05 np0005551604 cloud-init[921]: ci-info: |  eth0  | True | fe80::f816:3eff:fe10:2ee6/64 |       .       |  link  | fa:16:3e:10:2e:e6 |
Dec  9 04:43:05 np0005551604 cloud-init[921]: ci-info: |   lo   | True |          127.0.0.1           |   255.0.0.0   |  host  |         .         |
Dec  9 04:43:05 np0005551604 cloud-init[921]: ci-info: |   lo   | True |           ::1/128            |       .       |  host  |         .         |
Dec  9 04:43:05 np0005551604 cloud-init[921]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Dec  9 04:43:05 np0005551604 cloud-init[921]: ci-info: +++++++++++++++++++++++++++++++++Route IPv4 info+++++++++++++++++++++++++++++++++
Dec  9 04:43:05 np0005551604 cloud-init[921]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Dec  9 04:43:05 np0005551604 cloud-init[921]: ci-info: | Route |   Destination   |    Gateway    |     Genmask     | Interface | Flags |
Dec  9 04:43:05 np0005551604 cloud-init[921]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Dec  9 04:43:05 np0005551604 cloud-init[921]: ci-info: |   0   |     0.0.0.0     |  38.102.83.1  |     0.0.0.0     |    eth0   |   UG  |
Dec  9 04:43:05 np0005551604 cloud-init[921]: ci-info: |   1   |   38.102.83.0   |    0.0.0.0    |  255.255.255.0  |    eth0   |   U   |
Dec  9 04:43:05 np0005551604 cloud-init[921]: ci-info: |   2   | 169.254.169.254 | 38.102.83.126 | 255.255.255.255 |    eth0   |  UGH  |
Dec  9 04:43:05 np0005551604 cloud-init[921]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Dec  9 04:43:05 np0005551604 cloud-init[921]: ci-info: +++++++++++++++++++Route IPv6 info+++++++++++++++++++
Dec  9 04:43:05 np0005551604 cloud-init[921]: ci-info: +-------+-------------+---------+-----------+-------+
Dec  9 04:43:05 np0005551604 cloud-init[921]: ci-info: | Route | Destination | Gateway | Interface | Flags |
Dec  9 04:43:05 np0005551604 cloud-init[921]: ci-info: +-------+-------------+---------+-----------+-------+
Dec  9 04:43:05 np0005551604 cloud-init[921]: ci-info: |   1   |  fe80::/64  |    ::   |    eth0   |   U   |
Dec  9 04:43:05 np0005551604 cloud-init[921]: ci-info: |   3   |    local    |    ::   |    eth0   |   U   |
Dec  9 04:43:05 np0005551604 cloud-init[921]: ci-info: |   4   |  multicast  |    ::   |    eth0   |   U   |
Dec  9 04:43:05 np0005551604 cloud-init[921]: ci-info: +-------+-------------+---------+-----------+-------+
Dec  9 04:43:06 np0005551604 cloud-init[921]: Generating public/private rsa key pair.
Dec  9 04:43:06 np0005551604 cloud-init[921]: Your identification has been saved in /etc/ssh/ssh_host_rsa_key
Dec  9 04:43:06 np0005551604 cloud-init[921]: Your public key has been saved in /etc/ssh/ssh_host_rsa_key.pub
Dec  9 04:43:06 np0005551604 cloud-init[921]: The key fingerprint is:
Dec  9 04:43:06 np0005551604 cloud-init[921]: SHA256:8x6ylHaRMbQ+jPb+nRI56qR2RqxhnGZJeFEu4+cG5C8 root@np0005551604.novalocal
Dec  9 04:43:06 np0005551604 cloud-init[921]: The key's randomart image is:
Dec  9 04:43:06 np0005551604 cloud-init[921]: +---[RSA 3072]----+
Dec  9 04:43:06 np0005551604 cloud-init[921]: |         .o      |
Dec  9 04:43:06 np0005551604 cloud-init[921]: |        .o .     |
Dec  9 04:43:06 np0005551604 cloud-init[921]: |       .+.=      |
Dec  9 04:43:06 np0005551604 cloud-init[921]: |      .+o* +     |
Dec  9 04:43:06 np0005551604 cloud-init[921]: |       +S+B  .   |
Dec  9 04:43:06 np0005551604 cloud-init[921]: |       .OXoo+    |
Dec  9 04:43:06 np0005551604 cloud-init[921]: |       +E+X. o   |
Dec  9 04:43:06 np0005551604 cloud-init[921]: |       oo@+... . |
Dec  9 04:43:06 np0005551604 cloud-init[921]: |       .o++...o  |
Dec  9 04:43:06 np0005551604 cloud-init[921]: +----[SHA256]-----+
Dec  9 04:43:06 np0005551604 cloud-init[921]: Generating public/private ecdsa key pair.
Dec  9 04:43:06 np0005551604 cloud-init[921]: Your identification has been saved in /etc/ssh/ssh_host_ecdsa_key
Dec  9 04:43:06 np0005551604 cloud-init[921]: Your public key has been saved in /etc/ssh/ssh_host_ecdsa_key.pub
Dec  9 04:43:06 np0005551604 cloud-init[921]: The key fingerprint is:
Dec  9 04:43:06 np0005551604 cloud-init[921]: SHA256:q7BPSI9W8xkqDocR514/r20AaKBWJfXCAR/YXH8d4y4 root@np0005551604.novalocal
Dec  9 04:43:06 np0005551604 cloud-init[921]: The key's randomart image is:
Dec  9 04:43:06 np0005551604 cloud-init[921]: +---[ECDSA 256]---+
Dec  9 04:43:06 np0005551604 cloud-init[921]: |   oB=..     o   |
Dec  9 04:43:06 np0005551604 cloud-init[921]: |  .o+o+ .   o o  |
Dec  9 04:43:06 np0005551604 cloud-init[921]: | .o..= . . . o   |
Dec  9 04:43:06 np0005551604 cloud-init[921]: |.. +o o   . .    |
Dec  9 04:43:06 np0005551604 cloud-init[921]: |. ..o +.S  E .   |
Dec  9 04:43:06 np0005551604 cloud-init[921]: |   = * =.+  .    |
Dec  9 04:43:06 np0005551604 cloud-init[921]: |  o O + *.       |
Dec  9 04:43:06 np0005551604 cloud-init[921]: |   = = . +.      |
Dec  9 04:43:06 np0005551604 cloud-init[921]: |    o.o .oo      |
Dec  9 04:43:06 np0005551604 cloud-init[921]: +----[SHA256]-----+
Dec  9 04:43:06 np0005551604 cloud-init[921]: Generating public/private ed25519 key pair.
Dec  9 04:43:06 np0005551604 cloud-init[921]: Your identification has been saved in /etc/ssh/ssh_host_ed25519_key
Dec  9 04:43:06 np0005551604 cloud-init[921]: Your public key has been saved in /etc/ssh/ssh_host_ed25519_key.pub
Dec  9 04:43:06 np0005551604 cloud-init[921]: The key fingerprint is:
Dec  9 04:43:06 np0005551604 cloud-init[921]: SHA256:nk0BJT+6pnFEBUlVFVFdgAs/mr/6sbEKjFwlpRSlZ6I root@np0005551604.novalocal
Dec  9 04:43:06 np0005551604 cloud-init[921]: The key's randomart image is:
Dec  9 04:43:06 np0005551604 cloud-init[921]: +--[ED25519 256]--+
Dec  9 04:43:06 np0005551604 cloud-init[921]: |       .BB*..o===|
Dec  9 04:43:06 np0005551604 cloud-init[921]: |       ..O. .   .|
Dec  9 04:43:06 np0005551604 cloud-init[921]: |        * Bo .   |
Dec  9 04:43:06 np0005551604 cloud-init[921]: |       o B o+    |
Dec  9 04:43:06 np0005551604 cloud-init[921]: |      E S .o .   |
Dec  9 04:43:06 np0005551604 cloud-init[921]: |     . * =o      |
Dec  9 04:43:06 np0005551604 cloud-init[921]: |      + O ..o    |
Dec  9 04:43:06 np0005551604 cloud-init[921]: |       = .  .=   |
Dec  9 04:43:06 np0005551604 cloud-init[921]: |      .   o+=.   |
Dec  9 04:43:06 np0005551604 cloud-init[921]: +----[SHA256]-----+
Dec  9 04:43:07 np0005551604 systemd[1]: Finished Cloud-init: Network Stage.
Dec  9 04:43:07 np0005551604 systemd[1]: Reached target Cloud-config availability.
Dec  9 04:43:07 np0005551604 systemd[1]: Reached target Network is Online.
Dec  9 04:43:07 np0005551604 systemd[1]: Starting Cloud-init: Config Stage...
Dec  9 04:43:07 np0005551604 systemd[1]: Starting Crash recovery kernel arming...
Dec  9 04:43:07 np0005551604 systemd[1]: Starting Notify NFS peers of a restart...
Dec  9 04:43:07 np0005551604 systemd[1]: Starting System Logging Service...
Dec  9 04:43:07 np0005551604 sm-notify[1005]: Version 2.5.4 starting
Dec  9 04:43:07 np0005551604 systemd[1]: Starting OpenSSH server daemon...
Dec  9 04:43:07 np0005551604 systemd[1]: Starting Permit User Sessions...
Dec  9 04:43:07 np0005551604 systemd[1]: Started Notify NFS peers of a restart.
Dec  9 04:43:07 np0005551604 systemd[1]: Started OpenSSH server daemon.
Dec  9 04:43:07 np0005551604 systemd[1]: Finished Permit User Sessions.
Dec  9 04:43:07 np0005551604 systemd[1]: Started Command Scheduler.
Dec  9 04:43:07 np0005551604 systemd[1]: Started Getty on tty1.
Dec  9 04:43:07 np0005551604 rsyslogd[1006]: [origin software="rsyslogd" swVersion="8.2510.0-2.el9" x-pid="1006" x-info="https://www.rsyslog.com"] start
Dec  9 04:43:07 np0005551604 rsyslogd[1006]: imjournal: No statefile exists, /var/lib/rsyslog/imjournal.state will be created (ignore if this is first run): No such file or directory [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2040 ]
Dec  9 04:43:07 np0005551604 systemd[1]: Started Serial Getty on ttyS0.
Dec  9 04:43:07 np0005551604 systemd[1]: Reached target Login Prompts.
Dec  9 04:43:07 np0005551604 systemd[1]: Started System Logging Service.
Dec  9 04:43:07 np0005551604 systemd[1]: Reached target Multi-User System.
Dec  9 04:43:07 np0005551604 systemd[1]: Starting Record Runlevel Change in UTMP...
Dec  9 04:43:07 np0005551604 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Dec  9 04:43:07 np0005551604 systemd[1]: Finished Record Runlevel Change in UTMP.
Dec  9 04:43:07 np0005551604 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec  9 04:43:07 np0005551604 kdumpctl[1019]: kdump: No kdump initial ramdisk found.
Dec  9 04:43:07 np0005551604 kdumpctl[1019]: kdump: Rebuilding /boot/initramfs-5.14.0-648.el9.x86_64kdump.img
Dec  9 04:43:07 np0005551604 cloud-init[1110]: Cloud-init v. 24.4-7.el9 running 'modules:config' at Tue, 09 Dec 2025 09:43:07 +0000. Up 12.17 seconds.
Dec  9 04:43:07 np0005551604 systemd[1]: Finished Cloud-init: Config Stage.
Dec  9 04:43:07 np0005551604 systemd[1]: Starting Cloud-init: Final Stage...
Dec  9 04:43:07 np0005551604 dracut[1284]: dracut-057-102.git20250818.el9
Dec  9 04:43:07 np0005551604 cloud-init[1302]: Cloud-init v. 24.4-7.el9 running 'modules:final' at Tue, 09 Dec 2025 09:43:07 +0000. Up 12.54 seconds.
Dec  9 04:43:07 np0005551604 cloud-init[1304]: #############################################################
Dec  9 04:43:07 np0005551604 cloud-init[1309]: -----BEGIN SSH HOST KEY FINGERPRINTS-----
Dec  9 04:43:07 np0005551604 cloud-init[1316]: 256 SHA256:q7BPSI9W8xkqDocR514/r20AaKBWJfXCAR/YXH8d4y4 root@np0005551604.novalocal (ECDSA)
Dec  9 04:43:07 np0005551604 dracut[1287]: Executing: /usr/bin/dracut --quiet --hostonly --hostonly-cmdline --hostonly-i18n --hostonly-mode strict --hostonly-nics  --mount "/dev/disk/by-uuid/fcf6b761-831a-48a7-9f5f-068b5063763f /sysroot xfs rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,noquota" --squash-compressor zstd --no-hostonly-default-device --add-confdir /lib/kdump/dracut.conf.d -f /boot/initramfs-5.14.0-648.el9.x86_64kdump.img 5.14.0-648.el9.x86_64
Dec  9 04:43:07 np0005551604 cloud-init[1324]: 256 SHA256:nk0BJT+6pnFEBUlVFVFdgAs/mr/6sbEKjFwlpRSlZ6I root@np0005551604.novalocal (ED25519)
Dec  9 04:43:07 np0005551604 cloud-init[1331]: 3072 SHA256:8x6ylHaRMbQ+jPb+nRI56qR2RqxhnGZJeFEu4+cG5C8 root@np0005551604.novalocal (RSA)
Dec  9 04:43:07 np0005551604 cloud-init[1333]: -----END SSH HOST KEY FINGERPRINTS-----
Dec  9 04:43:07 np0005551604 cloud-init[1335]: #############################################################
Dec  9 04:43:07 np0005551604 cloud-init[1302]: Cloud-init v. 24.4-7.el9 finished at Tue, 09 Dec 2025 09:43:07 +0000. Datasource DataSourceConfigDrive [net,ver=2][source=/dev/sr0].  Up 12.71 seconds
Dec  9 04:43:07 np0005551604 systemd[1]: Finished Cloud-init: Final Stage.
Dec  9 04:43:07 np0005551604 systemd[1]: Reached target Cloud-init target.
Dec  9 04:43:08 np0005551604 dracut[1287]: dracut module 'systemd-networkd' will not be installed, because command 'networkctl' could not be found!
Dec  9 04:43:08 np0005551604 dracut[1287]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd' could not be found!
Dec  9 04:43:08 np0005551604 dracut[1287]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd-wait-online' could not be found!
Dec  9 04:43:08 np0005551604 dracut[1287]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Dec  9 04:43:08 np0005551604 dracut[1287]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Dec  9 04:43:08 np0005551604 dracut[1287]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Dec  9 04:43:08 np0005551604 dracut[1287]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Dec  9 04:43:08 np0005551604 dracut[1287]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Dec  9 04:43:08 np0005551604 dracut[1287]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Dec  9 04:43:08 np0005551604 dracut[1287]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Dec  9 04:43:08 np0005551604 dracut[1287]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Dec  9 04:43:08 np0005551604 dracut[1287]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Dec  9 04:43:08 np0005551604 dracut[1287]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Dec  9 04:43:08 np0005551604 dracut[1287]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Dec  9 04:43:08 np0005551604 dracut[1287]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Dec  9 04:43:08 np0005551604 dracut[1287]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Dec  9 04:43:08 np0005551604 dracut[1287]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Dec  9 04:43:08 np0005551604 dracut[1287]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Dec  9 04:43:08 np0005551604 dracut[1287]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Dec  9 04:43:08 np0005551604 dracut[1287]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Dec  9 04:43:08 np0005551604 dracut[1287]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Dec  9 04:43:08 np0005551604 dracut[1287]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Dec  9 04:43:08 np0005551604 dracut[1287]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Dec  9 04:43:08 np0005551604 dracut[1287]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Dec  9 04:43:08 np0005551604 dracut[1287]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Dec  9 04:43:08 np0005551604 dracut[1287]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Dec  9 04:43:08 np0005551604 dracut[1287]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Dec  9 04:43:08 np0005551604 dracut[1287]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Dec  9 04:43:08 np0005551604 dracut[1287]: dracut module 'biosdevname' will not be installed, because command 'biosdevname' could not be found!
Dec  9 04:43:08 np0005551604 dracut[1287]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Dec  9 04:43:08 np0005551604 dracut[1287]: memstrack is not available
Dec  9 04:43:08 np0005551604 dracut[1287]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Dec  9 04:43:08 np0005551604 dracut[1287]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Dec  9 04:43:08 np0005551604 dracut[1287]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Dec  9 04:43:08 np0005551604 dracut[1287]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Dec  9 04:43:08 np0005551604 dracut[1287]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Dec  9 04:43:08 np0005551604 dracut[1287]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Dec  9 04:43:08 np0005551604 dracut[1287]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Dec  9 04:43:08 np0005551604 dracut[1287]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Dec  9 04:43:08 np0005551604 dracut[1287]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Dec  9 04:43:08 np0005551604 dracut[1287]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Dec  9 04:43:08 np0005551604 dracut[1287]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Dec  9 04:43:08 np0005551604 dracut[1287]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Dec  9 04:43:08 np0005551604 dracut[1287]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Dec  9 04:43:08 np0005551604 dracut[1287]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Dec  9 04:43:09 np0005551604 dracut[1287]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Dec  9 04:43:09 np0005551604 dracut[1287]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Dec  9 04:43:09 np0005551604 dracut[1287]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Dec  9 04:43:09 np0005551604 dracut[1287]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Dec  9 04:43:09 np0005551604 dracut[1287]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Dec  9 04:43:09 np0005551604 dracut[1287]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Dec  9 04:43:09 np0005551604 dracut[1287]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Dec  9 04:43:09 np0005551604 dracut[1287]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Dec  9 04:43:09 np0005551604 dracut[1287]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Dec  9 04:43:09 np0005551604 dracut[1287]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Dec  9 04:43:09 np0005551604 dracut[1287]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Dec  9 04:43:09 np0005551604 dracut[1287]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Dec  9 04:43:09 np0005551604 dracut[1287]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Dec  9 04:43:09 np0005551604 dracut[1287]: memstrack is not available
Dec  9 04:43:09 np0005551604 dracut[1287]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Dec  9 04:43:09 np0005551604 dracut[1287]: *** Including module: systemd ***
Dec  9 04:43:09 np0005551604 dracut[1287]: *** Including module: fips ***
Dec  9 04:43:09 np0005551604 dracut[1287]: *** Including module: systemd-initrd ***
Dec  9 04:43:09 np0005551604 dracut[1287]: *** Including module: i18n ***
Dec  9 04:43:10 np0005551604 dracut[1287]: *** Including module: drm ***
Dec  9 04:43:10 np0005551604 chronyd[784]: Selected source 162.159.200.123 (2.centos.pool.ntp.org)
Dec  9 04:43:10 np0005551604 chronyd[784]: System clock TAI offset set to 37 seconds
Dec  9 04:43:10 np0005551604 dracut[1287]: *** Including module: prefixdevname ***
Dec  9 04:43:10 np0005551604 dracut[1287]: *** Including module: kernel-modules ***
Dec  9 04:43:10 np0005551604 kernel: block vda: the capability attribute has been deprecated.
Dec  9 04:43:11 np0005551604 dracut[1287]: *** Including module: kernel-modules-extra ***
Dec  9 04:43:11 np0005551604 dracut[1287]: *** Including module: qemu ***
Dec  9 04:43:11 np0005551604 dracut[1287]: *** Including module: fstab-sys ***
Dec  9 04:43:11 np0005551604 dracut[1287]: *** Including module: rootfs-block ***
Dec  9 04:43:11 np0005551604 dracut[1287]: *** Including module: terminfo ***
Dec  9 04:43:11 np0005551604 dracut[1287]: *** Including module: udev-rules ***
Dec  9 04:43:12 np0005551604 dracut[1287]: Skipping udev rule: 91-permissions.rules
Dec  9 04:43:12 np0005551604 dracut[1287]: Skipping udev rule: 80-drivers-modprobe.rules
Dec  9 04:43:12 np0005551604 dracut[1287]: *** Including module: virtiofs ***
Dec  9 04:43:12 np0005551604 dracut[1287]: *** Including module: dracut-systemd ***
Dec  9 04:43:12 np0005551604 dracut[1287]: *** Including module: usrmount ***
Dec  9 04:43:12 np0005551604 dracut[1287]: *** Including module: base ***
Dec  9 04:43:12 np0005551604 dracut[1287]: *** Including module: fs-lib ***
Dec  9 04:43:12 np0005551604 dracut[1287]: *** Including module: kdumpbase ***
Dec  9 04:43:13 np0005551604 irqbalance[795]: Cannot change IRQ 35 affinity: Operation not permitted
Dec  9 04:43:13 np0005551604 irqbalance[795]: IRQ 35 affinity is now unmanaged
Dec  9 04:43:13 np0005551604 irqbalance[795]: Cannot change IRQ 33 affinity: Operation not permitted
Dec  9 04:43:13 np0005551604 irqbalance[795]: IRQ 33 affinity is now unmanaged
Dec  9 04:43:13 np0005551604 irqbalance[795]: Cannot change IRQ 31 affinity: Operation not permitted
Dec  9 04:43:13 np0005551604 irqbalance[795]: IRQ 31 affinity is now unmanaged
Dec  9 04:43:13 np0005551604 irqbalance[795]: Cannot change IRQ 28 affinity: Operation not permitted
Dec  9 04:43:13 np0005551604 irqbalance[795]: IRQ 28 affinity is now unmanaged
Dec  9 04:43:13 np0005551604 irqbalance[795]: Cannot change IRQ 34 affinity: Operation not permitted
Dec  9 04:43:13 np0005551604 irqbalance[795]: IRQ 34 affinity is now unmanaged
Dec  9 04:43:13 np0005551604 irqbalance[795]: Cannot change IRQ 32 affinity: Operation not permitted
Dec  9 04:43:13 np0005551604 irqbalance[795]: IRQ 32 affinity is now unmanaged
Dec  9 04:43:13 np0005551604 irqbalance[795]: Cannot change IRQ 30 affinity: Operation not permitted
Dec  9 04:43:13 np0005551604 irqbalance[795]: IRQ 30 affinity is now unmanaged
Dec  9 04:43:13 np0005551604 irqbalance[795]: Cannot change IRQ 29 affinity: Operation not permitted
Dec  9 04:43:13 np0005551604 irqbalance[795]: IRQ 29 affinity is now unmanaged
Dec  9 04:43:13 np0005551604 dracut[1287]: *** Including module: microcode_ctl-fw_dir_override ***
Dec  9 04:43:13 np0005551604 dracut[1287]:  microcode_ctl module: mangling fw_dir
Dec  9 04:43:13 np0005551604 dracut[1287]:    microcode_ctl: reset fw_dir to "/lib/firmware/updates /lib/firmware"
Dec  9 04:43:13 np0005551604 dracut[1287]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel"...
Dec  9 04:43:13 np0005551604 dracut[1287]:    microcode_ctl: configuration "intel" is ignored
Dec  9 04:43:13 np0005551604 dracut[1287]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-2d-07"...
Dec  9 04:43:13 np0005551604 dracut[1287]:    microcode_ctl: configuration "intel-06-2d-07" is ignored
Dec  9 04:43:13 np0005551604 dracut[1287]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4e-03"...
Dec  9 04:43:13 np0005551604 dracut[1287]:    microcode_ctl: configuration "intel-06-4e-03" is ignored
Dec  9 04:43:13 np0005551604 dracut[1287]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4f-01"...
Dec  9 04:43:13 np0005551604 dracut[1287]:    microcode_ctl: configuration "intel-06-4f-01" is ignored
Dec  9 04:43:13 np0005551604 dracut[1287]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-55-04"...
Dec  9 04:43:13 np0005551604 dracut[1287]:    microcode_ctl: configuration "intel-06-55-04" is ignored
Dec  9 04:43:13 np0005551604 dracut[1287]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-5e-03"...
Dec  9 04:43:13 np0005551604 dracut[1287]:    microcode_ctl: configuration "intel-06-5e-03" is ignored
Dec  9 04:43:13 np0005551604 dracut[1287]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8c-01"...
Dec  9 04:43:13 np0005551604 dracut[1287]:    microcode_ctl: configuration "intel-06-8c-01" is ignored
Dec  9 04:43:13 np0005551604 dracut[1287]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-0xca"...
Dec  9 04:43:13 np0005551604 dracut[1287]:    microcode_ctl: configuration "intel-06-8e-9e-0x-0xca" is ignored
Dec  9 04:43:13 np0005551604 dracut[1287]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-dell"...
Dec  9 04:43:13 np0005551604 dracut[1287]:    microcode_ctl: configuration "intel-06-8e-9e-0x-dell" is ignored
Dec  9 04:43:13 np0005551604 dracut[1287]:    microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8f-08"...
Dec  9 04:43:13 np0005551604 dracut[1287]:    microcode_ctl: configuration "intel-06-8f-08" is ignored
Dec  9 04:43:13 np0005551604 dracut[1287]:    microcode_ctl: final fw_dir: "/lib/firmware/updates /lib/firmware"
Dec  9 04:43:13 np0005551604 dracut[1287]: *** Including module: openssl ***
Dec  9 04:43:13 np0005551604 dracut[1287]: *** Including module: shutdown ***
Dec  9 04:43:13 np0005551604 dracut[1287]: *** Including module: squash ***
Dec  9 04:43:13 np0005551604 dracut[1287]: *** Including modules done ***
Dec  9 04:43:13 np0005551604 dracut[1287]: *** Installing kernel module dependencies ***
Dec  9 04:43:14 np0005551604 dracut[1287]: *** Installing kernel module dependencies done ***
Dec  9 04:43:14 np0005551604 dracut[1287]: *** Resolving executable dependencies ***
Dec  9 04:43:15 np0005551604 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec  9 04:43:16 np0005551604 dracut[1287]: *** Resolving executable dependencies done ***
Dec  9 04:43:16 np0005551604 dracut[1287]: *** Generating early-microcode cpio image ***
Dec  9 04:43:16 np0005551604 dracut[1287]: *** Store current command line parameters ***
Dec  9 04:43:16 np0005551604 dracut[1287]: Stored kernel commandline:
Dec  9 04:43:16 np0005551604 dracut[1287]: No dracut internal kernel commandline stored in the initramfs
Dec  9 04:43:16 np0005551604 dracut[1287]: *** Install squash loader ***
Dec  9 04:43:17 np0005551604 dracut[1287]: *** Squashing the files inside the initramfs ***
Dec  9 04:43:18 np0005551604 dracut[1287]: *** Squashing the files inside the initramfs done ***
Dec  9 04:43:18 np0005551604 dracut[1287]: *** Creating image file '/boot/initramfs-5.14.0-648.el9.x86_64kdump.img' ***
Dec  9 04:43:18 np0005551604 dracut[1287]: *** Hardlinking files ***
Dec  9 04:43:19 np0005551604 dracut[1287]: *** Hardlinking files done ***
Dec  9 04:43:19 np0005551604 dracut[1287]: *** Creating initramfs image file '/boot/initramfs-5.14.0-648.el9.x86_64kdump.img' done ***
Dec  9 04:43:20 np0005551604 kdumpctl[1019]: kdump: kexec: loaded kdump kernel
Dec  9 04:43:20 np0005551604 kdumpctl[1019]: kdump: Starting kdump: [OK]
Dec  9 04:43:20 np0005551604 systemd[1]: Finished Crash recovery kernel arming.
Dec  9 04:43:20 np0005551604 systemd[1]: Startup finished in 2.829s (kernel) + 3.019s (initrd) + 19.960s (userspace) = 25.808s.
Dec  9 04:43:33 np0005551604 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Dec  9 04:43:45 np0005551604 systemd[1]: Created slice User Slice of UID 1000.
Dec  9 04:43:45 np0005551604 systemd[1]: Starting User Runtime Directory /run/user/1000...
Dec  9 04:43:45 np0005551604 systemd-logind[806]: New session 1 of user zuul.
Dec  9 04:43:45 np0005551604 systemd[1]: Finished User Runtime Directory /run/user/1000.
Dec  9 04:43:45 np0005551604 systemd[1]: Starting User Manager for UID 1000...
Dec  9 04:43:45 np0005551604 systemd[4301]: Queued start job for default target Main User Target.
Dec  9 04:43:45 np0005551604 systemd[4301]: Created slice User Application Slice.
Dec  9 04:43:45 np0005551604 systemd[4301]: Started Mark boot as successful after the user session has run 2 minutes.
Dec  9 04:43:45 np0005551604 systemd[4301]: Started Daily Cleanup of User's Temporary Directories.
Dec  9 04:43:45 np0005551604 systemd[4301]: Reached target Paths.
Dec  9 04:43:45 np0005551604 systemd[4301]: Reached target Timers.
Dec  9 04:43:45 np0005551604 systemd[4301]: Starting D-Bus User Message Bus Socket...
Dec  9 04:43:45 np0005551604 systemd[4301]: Starting Create User's Volatile Files and Directories...
Dec  9 04:43:45 np0005551604 systemd[4301]: Finished Create User's Volatile Files and Directories.
Dec  9 04:43:45 np0005551604 systemd[4301]: Listening on D-Bus User Message Bus Socket.
Dec  9 04:43:45 np0005551604 systemd[4301]: Reached target Sockets.
Dec  9 04:43:45 np0005551604 systemd[4301]: Reached target Basic System.
Dec  9 04:43:45 np0005551604 systemd[4301]: Reached target Main User Target.
Dec  9 04:43:45 np0005551604 systemd[4301]: Startup finished in 118ms.
Dec  9 04:43:45 np0005551604 systemd[1]: Started User Manager for UID 1000.
Dec  9 04:43:45 np0005551604 systemd[1]: Started Session 1 of User zuul.
Dec  9 04:43:46 np0005551604 python3[4383]: ansible-setup Invoked with gather_subset=['!all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  9 04:43:48 np0005551604 python3[4411]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  9 04:43:55 np0005551604 python3[4469]: ansible-setup Invoked with gather_subset=['network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  9 04:43:56 np0005551604 python3[4509]: ansible-zuul_console Invoked with path=/tmp/console-{log_uuid}.log port=19885 state=present
Dec  9 04:43:58 np0005551604 python3[4535]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDfkcz+sdBC5Hc6a3qciBGOfVToJT+Vi5tHJjyssf7GAu8+GUwHSBRHjCzVaRVCv34TNjQ0KR1a8RsTfkO5SOcTPVfafZ5Z/VdIy6+tlxb46kLefLVVzxfQCOsF1HmJvVAySMCNdoQ+/P72lP//rFYh61NHxYgXRFafnxyaoaOZ1c8sVTb5YaLKtOjNXEsdLedgrvPEcxUU5XZc7+KOaUKZmomh8rrCMDgTiLhX9N8mH5bOhO9jI3VPtGvuOSko8ccfWS4U39k5QeO1v6LIowwnF92n8KQk/gPnQ9fC8wl30ZAbVA82lhOIHOGhwfc1SpES3TYhycJVdlC1jaH2/g6Pq9QQDtFVl3Q88XPXdxi9ek1mE+VpQCFYkIs01tWM1J7YQdF8qhvsNcNB4MpecSt4pQWmAzo6qjnv0pWndbINJZbLQmUkHsy70K3iSMg1izw6a9CwGeKfJ95TGS5Q6OA5wuzhKqi8vB5NcEyGG6dm1BtCG5MxlKlmEN1dO7wGlaM= zuul-build-sshkey manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  9 04:43:58 np0005551604 python3[4559]: ansible-file Invoked with state=directory path=/home/zuul/.ssh mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 04:43:59 np0005551604 python3[4658]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  9 04:43:59 np0005551604 python3[4729]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765273438.6523213-207-66674816575962/source dest=/home/zuul/.ssh/id_rsa mode=384 force=False _original_basename=951b2216678a4038aa595858720b060b_id_rsa follow=False checksum=7fbdc97ee47d482a9627e7e4b08e66077527b1cf backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 04:43:59 np0005551604 python3[4852]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa.pub follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  9 04:44:00 np0005551604 python3[4923]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765273439.5455399-240-273145969724459/source dest=/home/zuul/.ssh/id_rsa.pub mode=420 force=False _original_basename=951b2216678a4038aa595858720b060b_id_rsa.pub follow=False checksum=cfe72fad0635b201e54e2478e069e3b356b4f8c7 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 04:44:01 np0005551604 python3[4971]: ansible-ping Invoked with data=pong
Dec  9 04:44:02 np0005551604 python3[4995]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  9 04:44:04 np0005551604 python3[5053]: ansible-zuul_debug_info Invoked with ipv4_route_required=False ipv6_route_required=False image_manifest_files=['/etc/dib-builddate.txt', '/etc/image-hostname.txt'] image_manifest=None traceroute_host=None
Dec  9 04:44:05 np0005551604 python3[5085]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 04:44:05 np0005551604 python3[5109]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 04:44:05 np0005551604 python3[5133]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 04:44:05 np0005551604 python3[5157]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 04:44:06 np0005551604 python3[5181]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 04:44:06 np0005551604 python3[5205]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 04:44:08 np0005551604 python3[5231]: ansible-file Invoked with path=/etc/ci state=directory owner=root group=root mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 04:44:08 np0005551604 python3[5309]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/mirror_info.sh follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  9 04:44:09 np0005551604 python3[5382]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/mirror_info.sh owner=root group=root mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1765273448.157397-21-60973359415278/source follow=False _original_basename=mirror_info.sh.j2 checksum=92d92a03afdddee82732741071f662c729080c35 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 04:44:09 np0005551604 python3[5430]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA4Z/c9osaGGtU6X8fgELwfj/yayRurfcKA0HMFfdpPxev2dbwljysMuzoVp4OZmW1gvGtyYPSNRvnzgsaabPNKNo2ym5NToCP6UM+KSe93aln4BcM/24mXChYAbXJQ5Bqq/pIzsGs/pKetQN+vwvMxLOwTvpcsCJBXaa981RKML6xj9l/UZ7IIq1HSEKMvPLxZMWdu0Ut8DkCd5F4nOw9Wgml2uYpDCj5LLCrQQ9ChdOMz8hz6SighhNlRpPkvPaet3OXxr/ytFMu7j7vv06CaEnuMMiY2aTWN1Imin9eHAylIqFHta/3gFfQSWt9jXM7owkBLKL7ATzhaAn+fjNupw== arxcruz@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  9 04:44:09 np0005551604 python3[5454]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDS4Fn6k4deCnIlOtLWqZJyksbepjQt04j8Ed8CGx9EKkj0fKiAxiI4TadXQYPuNHMixZy4Nevjb6aDhL5Z906TfvNHKUrjrG7G26a0k8vdc61NEQ7FmcGMWRLwwc6ReDO7lFpzYKBMk4YqfWgBuGU/K6WLKiVW2cVvwIuGIaYrE1OiiX0iVUUk7KApXlDJMXn7qjSYynfO4mF629NIp8FJal38+Kv+HA+0QkE5Y2xXnzD4Lar5+keymiCHRntPppXHeLIRzbt0gxC7v3L72hpQ3BTBEzwHpeS8KY+SX1y5lRMN45thCHfJqGmARJREDjBvWG8JXOPmVIKQtZmVcD5b mandreou@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  9 04:44:10 np0005551604 python3[5478]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC9MiLfy30deHA7xPOAlew5qUq3UP2gmRMYJi8PtkjFB20/DKeWwWNnkZPqP9AayruRoo51SIiVg870gbZE2jYl+Ncx/FYDe56JeC3ySZsXoAVkC9bP7gkOGqOmJjirvAgPMI7bogVz8i+66Q4Ar7OKTp3762G4IuWPPEg4ce4Y7lx9qWocZapHYq4cYKMxrOZ7SEbFSATBbe2bPZAPKTw8do/Eny+Hq/LkHFhIeyra6cqTFQYShr+zPln0Cr+ro/pDX3bB+1ubFgTpjpkkkQsLhDfR6cCdCWM2lgnS3BTtYj5Ct9/JRPR5YOphqZz+uB+OEu2IL68hmU9vNTth1KeX rlandy@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  9 04:44:10 np0005551604 python3[5502]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFCbgz8gdERiJlk2IKOtkjQxEXejrio6ZYMJAVJYpOIp raukadah@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  9 04:44:10 np0005551604 python3[5526]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBqb3Q/9uDf4LmihQ7xeJ9gA/STIQUFPSfyyV0m8AoQi bshewale@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  9 04:44:11 np0005551604 python3[5550]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC0I8QqQx0Az2ysJt2JuffucLijhBqnsXKEIx5GyHwxVULROa8VtNFXUDH6ZKZavhiMcmfHB2+TBTda+lDP4FldYj06dGmzCY+IYGa+uDRdxHNGYjvCfLFcmLlzRK6fNbTcui+KlUFUdKe0fb9CRoGKyhlJD5GRkM1Dv+Yb6Bj+RNnmm1fVGYxzmrD2utvffYEb0SZGWxq2R9gefx1q/3wCGjeqvufEV+AskPhVGc5T7t9eyZ4qmslkLh1/nMuaIBFcr9AUACRajsvk6mXrAN1g3HlBf2gQlhi1UEyfbqIQvzzFtsbLDlSum/KmKjy818GzvWjERfQ0VkGzCd9bSLVL dviroel@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  9 04:44:11 np0005551604 python3[5574]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDLOQd4ZLtkZXQGY6UwAr/06ppWQK4fDO3HaqxPk98csyOCBXsliSKK39Bso828+5srIXiW7aI6aC9P5mwi4mUZlGPfJlQbfrcGvY+b/SocuvaGK+1RrHLoJCT52LBhwgrzlXio2jeksZeein8iaTrhsPrOAs7KggIL/rB9hEiB3NaOPWhhoCP4vlW6MEMExGcqB/1FVxXFBPnLkEyW0Lk7ycVflZl2ocRxbfjZi0+tI1Wlinp8PvSQSc/WVrAcDgKjc/mB4ODPOyYy3G8FHgfMsrXSDEyjBKgLKMsdCrAUcqJQWjkqXleXSYOV4q3pzL+9umK+q/e3P/bIoSFQzmJKTU1eDfuvPXmow9F5H54fii/Da7ezlMJ+wPGHJrRAkmzvMbALy7xwswLhZMkOGNtRcPqaKYRmIBKpw3o6bCTtcNUHOtOQnzwY8JzrM2eBWJBXAANYw+9/ho80JIiwhg29CFNpVBuHbql2YxJQNrnl90guN65rYNpDxdIluweyUf8= anbanerj@kaermorhen manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  9 04:44:11 np0005551604 python3[5598]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC3VwV8Im9kRm49lt3tM36hj4Zv27FxGo4C1Q/0jqhzFmHY7RHbmeRr8ObhwWoHjXSozKWg8FL5ER0z3hTwL0W6lez3sL7hUaCmSuZmG5Hnl3x4vTSxDI9JZ/Y65rtYiiWQo2fC5xJhU/4+0e5e/pseCm8cKRSu+SaxhO+sd6FDojA2x1BzOzKiQRDy/1zWGp/cZkxcEuB1wHI5LMzN03c67vmbu+fhZRAUO4dQkvcnj2LrhQtpa+ytvnSjr8icMDosf1OsbSffwZFyHB/hfWGAfe0eIeSA2XPraxiPknXxiPKx2MJsaUTYbsZcm3EjFdHBBMumw5rBI74zLrMRvCO9GwBEmGT4rFng1nP+yw5DB8sn2zqpOsPg1LYRwCPOUveC13P6pgsZZPh812e8v5EKnETct+5XI3dVpdw6CnNiLwAyVAF15DJvBGT/u1k0Myg/bQn+Gv9k2MSj6LvQmf6WbZu2Wgjm30z3FyCneBqTL7mLF19YXzeC0ufHz5pnO1E= dasm@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  9 04:44:11 np0005551604 python3[5622]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHUnwjB20UKmsSed9X73eGNV5AOEFccQ3NYrRW776pEk cjeanner manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  9 04:44:12 np0005551604 python3[5646]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDercCMGn8rW1C4P67tHgtflPdTeXlpyUJYH+6XDd2lR jgilaber@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  9 04:44:12 np0005551604 python3[5670]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAMI6kkg9Wg0sG7jIJmyZemEBwUn1yzNpQQd3gnulOmZ adrianfuscoarnejo@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  9 04:44:12 np0005551604 python3[5694]: ansible-authorized_key Invoked with user=zuul state=present key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPijwpQu/3jhhhBZInXNOLEH57DrknPc3PLbsRvYyJIFzwYjX+WD4a7+nGnMYS42MuZk6TJcVqgnqofVx4isoD4= ramishra@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  9 04:44:12 np0005551604 python3[5718]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGpU/BepK3qX0NRf5Np+dOBDqzQEefhNrw2DCZaH3uWW rebtoor@monolith manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  9 04:44:13 np0005551604 python3[5742]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDK0iKdi8jQTpQrDdLVH/AAgLVYyTXF7AQ1gjc/5uT3t ykarel@yatinkarel manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  9 04:44:13 np0005551604 python3[5766]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF/V/cLotA6LZeO32VL45Hd78skuA2lJA425Sm2LlQeZ fmount@horcrux manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  9 04:44:13 np0005551604 python3[5790]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDa7QCjuDMVmRPo1rREbGwzYeBCYVN+Ou/3WKXZEC6Sr manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  9 04:44:13 np0005551604 python3[5814]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCfNtF7NvKl915TGsGGoseUb06Hj8L/S4toWf0hExeY+F00woL6NvBlJD0nDct+P5a22I4EhvoQCRQ8reaPCm1lybR3uiRIJsj+8zkVvLwby9LXzfZorlNG9ofjd00FEmB09uW/YvTl6Q9XwwwX6tInzIOv3TMqTHHGOL74ibbj8J/FJR0cFEyj0z4WQRvtkh32xAHl83gbuINryMt0sqRI+clj2381NKL55DRLQrVw0gsfqqxiHAnXg21qWmc4J+b9e9kiuAFQjcjwTVkwJCcg3xbPwC/qokYRby/Y5S40UUd7/jEARGXT7RZgpzTuDd1oZiCVrnrqJNPaMNdVv5MLeFdf1B7iIe5aa/fGouX7AO4SdKhZUdnJmCFAGvjC6S3JMZ2wAcUl+OHnssfmdj7XL50cLo27vjuzMtLAgSqi6N99m92WCF2s8J9aVzszX7Xz9OKZCeGsiVJp3/NdABKzSEAyM9xBD/5Vho894Sav+otpySHe3p6RUTgbB5Zu8VyZRZ/UtB3ueXxyo764yrc6qWIDqrehm84Xm9g+/jpIBzGPl07NUNJpdt/6Sgf9RIKXw/7XypO5yZfUcuFNGTxLfqjTNrtgLZNcjfav6sSdVXVcMPL//XNuRdKmVFaO76eV/oGMQGr1fGcCD+N+CpI7+Q+fCNB6VFWG4nZFuI/Iuw== averdagu@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  9 04:44:14 np0005551604 python3[5838]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDq8l27xI+QlQVdS4djp9ogSoyrNE2+Ox6vKPdhSNL1J3PE5w+WCSvMz9A5gnNuH810zwbekEApbxTze/gLQJwBHA52CChfURpXrFaxY7ePXRElwKAL3mJfzBWY/c5jnNL9TCVmFJTGZkFZP3Nh+BMgZvL6xBkt3WKm6Uq18qzd9XeKcZusrA+O+uLv1fVeQnadY9RIqOCyeFYCzLWrUfTyE8x/XG0hAWIM7qpnF2cALQS2h9n4hW5ybiUN790H08wf9hFwEf5nxY9Z9dVkPFQiTSGKNBzmnCXU9skxS/xhpFjJ5duGSZdtAHe9O+nGZm9c67hxgtf8e5PDuqAdXEv2cf6e3VBAt+Bz8EKI3yosTj0oZHfwr42Yzb1l/SKy14Rggsrc9KAQlrGXan6+u2jcQqqx7l+SWmnpFiWTV9u5cWj2IgOhApOitmRBPYqk9rE2usfO0hLn/Pj/R/Nau4803e1/EikdLE7Ps95s9mX5jRDjAoUa2JwFF5RsVFyL910= ashigupt@ashigupt.remote.csb manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  9 04:44:14 np0005551604 python3[5862]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOKLl0NYKwoZ/JY5KeZU8VwRAggeOxqQJeoqp3dsAaY9 manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  9 04:44:14 np0005551604 python3[5886]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIASASQOH2BcOyLKuuDOdWZlPi2orcjcA8q4400T73DLH evallesp@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  9 04:44:15 np0005551604 python3[5910]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILeBWlamUph+jRKV2qrx1PGU7vWuGIt5+z9k96I8WehW amsinha@amsinha-mac manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  9 04:44:15 np0005551604 python3[5934]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIANvVgvJBlK3gb1yz5uef/JqIGq4HLEmY2dYA8e37swb morenod@redhat-laptop manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  9 04:44:15 np0005551604 python3[5958]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDZdI7t1cxYx65heVI24HTV4F7oQLW1zyfxHreL2TIJKxjyrUUKIFEUmTutcBlJRLNT2Eoix6x1sOw9YrchloCLcn//SGfTElr9mSc5jbjb7QXEU+zJMhtxyEJ1Po3CUGnj7ckiIXw7wcawZtrEOAQ9pH3ExYCJcEMiyNjRQZCxT3tPK+S4B95EWh5Fsrz9CkwpjNRPPH7LigCeQTM3Wc7r97utAslBUUvYceDSLA7rMgkitJE38b7rZBeYzsGQ8YYUBjTCtehqQXxCRjizbHWaaZkBU+N3zkKB6n/iCNGIO690NK7A/qb6msTijiz1PeuM8ThOsi9qXnbX5v0PoTpcFSojV7NHAQ71f0XXuS43FhZctT+Dcx44dT8Fb5vJu2cJGrk+qF8ZgJYNpRS7gPg0EG2EqjK7JMf9ULdjSu0r+KlqIAyLvtzT4eOnQipoKlb/WG5D/0ohKv7OMQ352ggfkBFIQsRXyyTCT98Ft9juqPuahi3CAQmP4H9dyE+7+Kz437PEtsxLmfm6naNmWi7Ee1DqWPwS8rEajsm4sNM4wW9gdBboJQtc0uZw0DfLj1I9r3Mc8Ol0jYtz0yNQDSzVLrGCaJlC311trU70tZ+ZkAVV6Mn8lOhSbj1cK0lvSr6ZK4dgqGl3I1eTZJJhbLNdg7UOVaiRx9543+C/p/As7w== brjackma@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  9 04:44:15 np0005551604 python3[5982]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKwedoZ0TWPJX/z/4TAbO/kKcDZOQVgRH0hAqrL5UCI1 vcastell@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  9 04:44:16 np0005551604 python3[6006]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEmv8sE8GCk6ZTPIqF0FQrttBdL3mq7rCm/IJy0xDFh7 michburk@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  9 04:44:16 np0005551604 python3[6030]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICy6GpGEtwevXEEn4mmLR5lmSLe23dGgAvzkB9DMNbkf rsafrono@rsafrono manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  9 04:44:19 np0005551604 python3[6056]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Dec  9 04:44:19 np0005551604 systemd[1]: Starting Time & Date Service...
Dec  9 04:44:19 np0005551604 systemd[1]: Started Time & Date Service.
Dec  9 04:44:19 np0005551604 systemd-timedated[6058]: Changed time zone to 'UTC' (UTC).
Dec  9 04:44:20 np0005551604 python3[6087]: ansible-file Invoked with path=/etc/nodepool state=directory mode=511 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 04:44:20 np0005551604 python3[6163]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  9 04:44:21 np0005551604 python3[6234]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes src=/home/zuul/.ansible/tmp/ansible-tmp-1765273460.4512272-153-13661873418982/source _original_basename=tmpu8075k11 follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 04:44:21 np0005551604 python3[6334]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  9 04:44:21 np0005551604 python3[6405]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes_private src=/home/zuul/.ansible/tmp/ansible-tmp-1765273461.2445004-183-207328540469213/source _original_basename=tmp2eytex4x follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 04:44:22 np0005551604 python3[6507]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/node_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  9 04:44:22 np0005551604 python3[6580]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/node_private src=/home/zuul/.ansible/tmp/ansible-tmp-1765273462.3794315-231-25856136471834/source _original_basename=tmprkokdgb1 follow=False checksum=863ac53d108e41f2ca0bf1e77a656f71228bd1da backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 04:44:23 np0005551604 python3[6628]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa /etc/nodepool/id_rsa zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  9 04:44:24 np0005551604 python3[6654]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa.pub /etc/nodepool/id_rsa.pub zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  9 04:44:24 np0005551604 python3[6734]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/zuul-sudo-grep follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  9 04:44:24 np0005551604 python3[6807]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/zuul-sudo-grep mode=288 src=/home/zuul/.ansible/tmp/ansible-tmp-1765273464.2072675-273-176116194027674/source _original_basename=tmp6h523_3_ follow=False checksum=bdca1a77493d00fb51567671791f4aa30f66c2f0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 04:44:25 np0005551604 python3[6858]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/visudo -c zuul_log_id=fa163ef9-e89a-f51e-2f7d-00000000001d-1-compute0 zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  9 04:44:26 np0005551604 python3[6886]: ansible-ansible.legacy.command Invoked with executable=/bin/bash _raw_params=env#012 _uses_shell=True zuul_log_id=fa163ef9-e89a-f51e-2f7d-00000000001e-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None creates=None removes=None stdin=None
Dec  9 04:44:27 np0005551604 python3[6914]: ansible-file Invoked with path=/home/zuul/workspace state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 04:44:49 np0005551604 python3[6940]: ansible-ansible.builtin.file Invoked with path=/etc/ci/env state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 04:44:49 np0005551604 systemd[1]: systemd-timedated.service: Deactivated successfully.
Dec  9 04:45:23 np0005551604 kernel: pci 0000:00:07.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Dec  9 04:45:23 np0005551604 kernel: pci 0000:00:07.0: BAR 0 [io  0x0000-0x003f]
Dec  9 04:45:23 np0005551604 kernel: pci 0000:00:07.0: BAR 1 [mem 0x00000000-0x00000fff]
Dec  9 04:45:23 np0005551604 kernel: pci 0000:00:07.0: BAR 4 [mem 0x00000000-0x00003fff 64bit pref]
Dec  9 04:45:23 np0005551604 kernel: pci 0000:00:07.0: ROM [mem 0x00000000-0x0007ffff pref]
Dec  9 04:45:23 np0005551604 kernel: pci 0000:00:07.0: ROM [mem 0xc0000000-0xc007ffff pref]: assigned
Dec  9 04:45:23 np0005551604 kernel: pci 0000:00:07.0: BAR 4 [mem 0x240000000-0x240003fff 64bit pref]: assigned
Dec  9 04:45:23 np0005551604 kernel: pci 0000:00:07.0: BAR 1 [mem 0xc0080000-0xc0080fff]: assigned
Dec  9 04:45:23 np0005551604 kernel: pci 0000:00:07.0: BAR 0 [io  0x1000-0x103f]: assigned
Dec  9 04:45:23 np0005551604 kernel: virtio-pci 0000:00:07.0: enabling device (0000 -> 0003)
Dec  9 04:45:23 np0005551604 NetworkManager[856]: <info>  [1765273523.4844] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Dec  9 04:45:23 np0005551604 systemd-udevd[6943]: Network interface NamePolicy= disabled on kernel command line.
Dec  9 04:45:23 np0005551604 NetworkManager[856]: <info>  [1765273523.5015] device (eth1): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  9 04:45:23 np0005551604 NetworkManager[856]: <info>  [1765273523.5045] settings: (eth1): created default wired connection 'Wired connection 1'
Dec  9 04:45:23 np0005551604 NetworkManager[856]: <info>  [1765273523.5048] device (eth1): carrier: link connected
Dec  9 04:45:23 np0005551604 NetworkManager[856]: <info>  [1765273523.5050] device (eth1): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Dec  9 04:45:23 np0005551604 NetworkManager[856]: <info>  [1765273523.5057] policy: auto-activating connection 'Wired connection 1' (c9d71888-3b72-38d5-8bab-6a45e2651a1e)
Dec  9 04:45:23 np0005551604 NetworkManager[856]: <info>  [1765273523.5061] device (eth1): Activation: starting connection 'Wired connection 1' (c9d71888-3b72-38d5-8bab-6a45e2651a1e)
Dec  9 04:45:23 np0005551604 NetworkManager[856]: <info>  [1765273523.5062] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  9 04:45:23 np0005551604 NetworkManager[856]: <info>  [1765273523.5065] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  9 04:45:23 np0005551604 NetworkManager[856]: <info>  [1765273523.5069] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  9 04:45:23 np0005551604 NetworkManager[856]: <info>  [1765273523.5074] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Dec  9 04:45:24 np0005551604 python3[6970]: ansible-ansible.legacy.command Invoked with _raw_params=ip -j link zuul_log_id=fa163ef9-e89a-007f-1033-0000000000fc-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  9 04:45:31 np0005551604 python3[7050]: ansible-ansible.legacy.stat Invoked with path=/etc/NetworkManager/system-connections/ci-private-network.nmconnection follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  9 04:45:31 np0005551604 python3[7123]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765273531.1486325-102-267079009406212/source dest=/etc/NetworkManager/system-connections/ci-private-network.nmconnection mode=0600 owner=root group=root follow=False _original_basename=bootstrap-ci-network-nm-connection.nmconnection.j2 checksum=f926591e6fe74292e839d01c93dcd1f97740fbb7 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 04:45:32 np0005551604 python3[7173]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  9 04:45:32 np0005551604 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Dec  9 04:45:32 np0005551604 systemd[1]: Stopped Network Manager Wait Online.
Dec  9 04:45:32 np0005551604 systemd[1]: Stopping Network Manager Wait Online...
Dec  9 04:45:32 np0005551604 NetworkManager[856]: <info>  [1765273532.7654] caught SIGTERM, shutting down normally.
Dec  9 04:45:32 np0005551604 systemd[1]: Stopping Network Manager...
Dec  9 04:45:32 np0005551604 NetworkManager[856]: <info>  [1765273532.7665] dhcp4 (eth0): canceled DHCP transaction
Dec  9 04:45:32 np0005551604 NetworkManager[856]: <info>  [1765273532.7665] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec  9 04:45:32 np0005551604 NetworkManager[856]: <info>  [1765273532.7666] dhcp4 (eth0): state changed no lease
Dec  9 04:45:32 np0005551604 NetworkManager[856]: <info>  [1765273532.7669] manager: NetworkManager state is now CONNECTING
Dec  9 04:45:32 np0005551604 NetworkManager[856]: <info>  [1765273532.7809] dhcp4 (eth1): canceled DHCP transaction
Dec  9 04:45:32 np0005551604 NetworkManager[856]: <info>  [1765273532.7810] dhcp4 (eth1): state changed no lease
Dec  9 04:45:32 np0005551604 systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec  9 04:45:32 np0005551604 NetworkManager[856]: <info>  [1765273532.7884] exiting (success)
Dec  9 04:45:32 np0005551604 systemd[1]: Started Network Manager Script Dispatcher Service.
Dec  9 04:45:32 np0005551604 systemd[1]: NetworkManager.service: Deactivated successfully.
Dec  9 04:45:32 np0005551604 systemd[1]: Stopped Network Manager.
Dec  9 04:45:32 np0005551604 systemd[1]: NetworkManager.service: Consumed 1.004s CPU time, 9.9M memory peak.
Dec  9 04:45:32 np0005551604 systemd[1]: Starting Network Manager...
Dec  9 04:45:32 np0005551604 NetworkManager[7184]: <info>  [1765273532.8629] NetworkManager (version 1.54.2-1.el9) is starting... (after a restart, boot:f43569a1-1096-4e67-91b2-bda287c55398)
Dec  9 04:45:32 np0005551604 NetworkManager[7184]: <info>  [1765273532.8631] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Dec  9 04:45:32 np0005551604 NetworkManager[7184]: <info>  [1765273532.8719] manager[0x5636826db000]: monitoring kernel firmware directory '/lib/firmware'.
Dec  9 04:45:32 np0005551604 systemd[1]: Starting Hostname Service...
Dec  9 04:45:32 np0005551604 systemd[1]: Started Hostname Service.
Dec  9 04:45:32 np0005551604 NetworkManager[7184]: <info>  [1765273532.9857] hostname: hostname: using hostnamed
Dec  9 04:45:32 np0005551604 NetworkManager[7184]: <info>  [1765273532.9859] hostname: static hostname changed from (none) to "np0005551604.novalocal"
Dec  9 04:45:32 np0005551604 NetworkManager[7184]: <info>  [1765273532.9864] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Dec  9 04:45:32 np0005551604 NetworkManager[7184]: <info>  [1765273532.9868] manager[0x5636826db000]: rfkill: Wi-Fi hardware radio set enabled
Dec  9 04:45:32 np0005551604 NetworkManager[7184]: <info>  [1765273532.9868] manager[0x5636826db000]: rfkill: WWAN hardware radio set enabled
Dec  9 04:45:32 np0005551604 NetworkManager[7184]: <info>  [1765273532.9900] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.2-1.el9/libnm-device-plugin-team.so)
Dec  9 04:45:32 np0005551604 NetworkManager[7184]: <info>  [1765273532.9900] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Dec  9 04:45:32 np0005551604 NetworkManager[7184]: <info>  [1765273532.9901] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Dec  9 04:45:32 np0005551604 NetworkManager[7184]: <info>  [1765273532.9901] manager: Networking is enabled by state file
Dec  9 04:45:32 np0005551604 NetworkManager[7184]: <info>  [1765273532.9904] settings: Loaded settings plugin: keyfile (internal)
Dec  9 04:45:32 np0005551604 NetworkManager[7184]: <info>  [1765273532.9907] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.2-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Dec  9 04:45:32 np0005551604 NetworkManager[7184]: <info>  [1765273532.9933] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Dec  9 04:45:32 np0005551604 NetworkManager[7184]: <info>  [1765273532.9942] dhcp: init: Using DHCP client 'internal'
Dec  9 04:45:32 np0005551604 NetworkManager[7184]: <info>  [1765273532.9945] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Dec  9 04:45:32 np0005551604 NetworkManager[7184]: <info>  [1765273532.9950] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  9 04:45:32 np0005551604 NetworkManager[7184]: <info>  [1765273532.9956] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Dec  9 04:45:32 np0005551604 NetworkManager[7184]: <info>  [1765273532.9964] device (lo): Activation: starting connection 'lo' (4d2460cc-3851-4697-811d-bb6085f75db6)
Dec  9 04:45:32 np0005551604 NetworkManager[7184]: <info>  [1765273532.9971] device (eth0): carrier: link connected
Dec  9 04:45:32 np0005551604 NetworkManager[7184]: <info>  [1765273532.9975] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Dec  9 04:45:32 np0005551604 NetworkManager[7184]: <info>  [1765273532.9980] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Dec  9 04:45:32 np0005551604 NetworkManager[7184]: <info>  [1765273532.9981] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Dec  9 04:45:32 np0005551604 NetworkManager[7184]: <info>  [1765273532.9987] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Dec  9 04:45:32 np0005551604 NetworkManager[7184]: <info>  [1765273532.9994] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Dec  9 04:45:33 np0005551604 NetworkManager[7184]: <info>  [1765273533.0000] device (eth1): carrier: link connected
Dec  9 04:45:33 np0005551604 NetworkManager[7184]: <info>  [1765273533.0004] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Dec  9 04:45:33 np0005551604 NetworkManager[7184]: <info>  [1765273533.0010] manager: (eth1): assume: will attempt to assume matching connection 'Wired connection 1' (c9d71888-3b72-38d5-8bab-6a45e2651a1e) (indicated)
Dec  9 04:45:33 np0005551604 NetworkManager[7184]: <info>  [1765273533.0010] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Dec  9 04:45:33 np0005551604 NetworkManager[7184]: <info>  [1765273533.0016] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Dec  9 04:45:33 np0005551604 NetworkManager[7184]: <info>  [1765273533.0024] device (eth1): Activation: starting connection 'Wired connection 1' (c9d71888-3b72-38d5-8bab-6a45e2651a1e)
Dec  9 04:45:33 np0005551604 systemd[1]: Started Network Manager.
Dec  9 04:45:33 np0005551604 NetworkManager[7184]: <info>  [1765273533.0033] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Dec  9 04:45:33 np0005551604 NetworkManager[7184]: <info>  [1765273533.0039] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Dec  9 04:45:33 np0005551604 NetworkManager[7184]: <info>  [1765273533.0042] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Dec  9 04:45:33 np0005551604 NetworkManager[7184]: <info>  [1765273533.0045] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Dec  9 04:45:33 np0005551604 NetworkManager[7184]: <info>  [1765273533.0048] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Dec  9 04:45:33 np0005551604 NetworkManager[7184]: <info>  [1765273533.0051] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Dec  9 04:45:33 np0005551604 NetworkManager[7184]: <info>  [1765273533.0053] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Dec  9 04:45:33 np0005551604 NetworkManager[7184]: <info>  [1765273533.0056] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Dec  9 04:45:33 np0005551604 NetworkManager[7184]: <info>  [1765273533.0059] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Dec  9 04:45:33 np0005551604 NetworkManager[7184]: <info>  [1765273533.0065] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Dec  9 04:45:33 np0005551604 NetworkManager[7184]: <info>  [1765273533.0069] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec  9 04:45:33 np0005551604 NetworkManager[7184]: <info>  [1765273533.0079] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Dec  9 04:45:33 np0005551604 NetworkManager[7184]: <info>  [1765273533.0083] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Dec  9 04:45:33 np0005551604 NetworkManager[7184]: <info>  [1765273533.0103] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Dec  9 04:45:33 np0005551604 NetworkManager[7184]: <info>  [1765273533.0106] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Dec  9 04:45:33 np0005551604 NetworkManager[7184]: <info>  [1765273533.0112] device (lo): Activation: successful, device activated.
Dec  9 04:45:33 np0005551604 NetworkManager[7184]: <info>  [1765273533.0120] dhcp4 (eth0): state changed new lease, address=38.102.83.201
Dec  9 04:45:33 np0005551604 NetworkManager[7184]: <info>  [1765273533.0128] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Dec  9 04:45:33 np0005551604 NetworkManager[7184]: <info>  [1765273533.0195] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Dec  9 04:45:33 np0005551604 NetworkManager[7184]: <info>  [1765273533.0227] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Dec  9 04:45:33 np0005551604 NetworkManager[7184]: <info>  [1765273533.0230] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Dec  9 04:45:33 np0005551604 NetworkManager[7184]: <info>  [1765273533.0234] manager: NetworkManager state is now CONNECTED_SITE
Dec  9 04:45:33 np0005551604 NetworkManager[7184]: <info>  [1765273533.0238] device (eth0): Activation: successful, device activated.
Dec  9 04:45:33 np0005551604 NetworkManager[7184]: <info>  [1765273533.0244] manager: NetworkManager state is now CONNECTED_GLOBAL
Dec  9 04:45:33 np0005551604 systemd[1]: Starting Network Manager Wait Online...
Dec  9 04:45:33 np0005551604 python3[7257]: ansible-ansible.legacy.command Invoked with _raw_params=ip route zuul_log_id=fa163ef9-e89a-007f-1033-0000000000a7-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  9 04:45:43 np0005551604 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec  9 04:46:00 np0005551604 systemd[4301]: Starting Mark boot as successful...
Dec  9 04:46:00 np0005551604 systemd[4301]: Finished Mark boot as successful.
Dec  9 04:46:03 np0005551604 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Dec  9 04:46:18 np0005551604 NetworkManager[7184]: <info>  [1765273578.1564] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Dec  9 04:46:18 np0005551604 systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec  9 04:46:18 np0005551604 systemd[1]: Started Network Manager Script Dispatcher Service.
Dec  9 04:46:18 np0005551604 NetworkManager[7184]: <info>  [1765273578.1968] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Dec  9 04:46:18 np0005551604 NetworkManager[7184]: <info>  [1765273578.1972] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Dec  9 04:46:18 np0005551604 NetworkManager[7184]: <info>  [1765273578.1983] device (eth1): Activation: successful, device activated.
Dec  9 04:46:18 np0005551604 NetworkManager[7184]: <info>  [1765273578.1994] manager: startup complete
Dec  9 04:46:18 np0005551604 NetworkManager[7184]: <info>  [1765273578.1998] device (eth1): state change: activated -> failed (reason 'ip-config-unavailable', managed-type: 'full')
Dec  9 04:46:18 np0005551604 NetworkManager[7184]: <warn>  [1765273578.2007] device (eth1): Activation: failed for connection 'Wired connection 1'
Dec  9 04:46:18 np0005551604 NetworkManager[7184]: <info>  [1765273578.2019] device (eth1): state change: failed -> disconnected (reason 'none', managed-type: 'full')
Dec  9 04:46:18 np0005551604 systemd[1]: Finished Network Manager Wait Online.
Dec  9 04:46:18 np0005551604 NetworkManager[7184]: <info>  [1765273578.2128] dhcp4 (eth1): canceled DHCP transaction
Dec  9 04:46:18 np0005551604 NetworkManager[7184]: <info>  [1765273578.2129] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Dec  9 04:46:18 np0005551604 NetworkManager[7184]: <info>  [1765273578.2129] dhcp4 (eth1): state changed no lease
Dec  9 04:46:18 np0005551604 NetworkManager[7184]: <info>  [1765273578.2156] policy: auto-activating connection 'ci-private-network' (6b6a22e5-bf6a-510d-869a-e83c7a7cb57f)
Dec  9 04:46:18 np0005551604 NetworkManager[7184]: <info>  [1765273578.2164] device (eth1): Activation: starting connection 'ci-private-network' (6b6a22e5-bf6a-510d-869a-e83c7a7cb57f)
Dec  9 04:46:18 np0005551604 NetworkManager[7184]: <info>  [1765273578.2167] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  9 04:46:18 np0005551604 NetworkManager[7184]: <info>  [1765273578.2175] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  9 04:46:18 np0005551604 NetworkManager[7184]: <info>  [1765273578.2189] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  9 04:46:18 np0005551604 NetworkManager[7184]: <info>  [1765273578.2209] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  9 04:46:18 np0005551604 NetworkManager[7184]: <info>  [1765273578.2289] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  9 04:46:18 np0005551604 NetworkManager[7184]: <info>  [1765273578.2293] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  9 04:46:18 np0005551604 NetworkManager[7184]: <info>  [1765273578.2306] device (eth1): Activation: successful, device activated.
Dec  9 04:46:28 np0005551604 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec  9 04:46:33 np0005551604 systemd-logind[806]: Session 1 logged out. Waiting for processes to exit.
Dec  9 04:46:33 np0005551604 systemd-logind[806]: New session 3 of user zuul.
Dec  9 04:46:33 np0005551604 systemd[1]: Started Session 3 of User zuul.
Dec  9 04:46:33 np0005551604 python3[7367]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/env/networking-info.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  9 04:46:33 np0005551604 python3[7440]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/env/networking-info.yml owner=root group=root mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765273593.301405-259-42036110475273/source _original_basename=tmpmortlohk follow=False checksum=4ae1a859dd4000488bb89b035ed2aff6b8cccaf9 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 04:46:36 np0005551604 systemd[1]: session-3.scope: Deactivated successfully.
Dec  9 04:46:36 np0005551604 systemd-logind[806]: Session 3 logged out. Waiting for processes to exit.
Dec  9 04:46:36 np0005551604 systemd-logind[806]: Removed session 3.
Dec  9 04:49:00 np0005551604 systemd[4301]: Created slice User Background Tasks Slice.
Dec  9 04:49:00 np0005551604 systemd[4301]: Starting Cleanup of User's Temporary Files and Directories...
Dec  9 04:49:00 np0005551604 systemd[4301]: Finished Cleanup of User's Temporary Files and Directories.
Dec  9 04:51:17 np0005551604 systemd-logind[806]: New session 4 of user zuul.
Dec  9 04:51:17 np0005551604 systemd[1]: Started Session 4 of User zuul.
Dec  9 04:51:18 np0005551604 python3[7500]: ansible-ansible.legacy.command Invoked with _raw_params=lsblk -nd -o MAJ:MIN /dev/vda#012 _uses_shell=True zuul_log_id=fa163ef9-e89a-74ef-b9a8-000000001f15-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  9 04:51:18 np0005551604 python3[7529]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/init.scope state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 04:51:18 np0005551604 python3[7555]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/machine.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 04:51:19 np0005551604 python3[7581]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/system.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 04:51:19 np0005551604 python3[7607]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/user.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 04:51:20 np0005551604 python3[7633]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system.conf.d state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 04:51:20 np0005551604 python3[7711]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system.conf.d/override.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  9 04:51:21 np0005551604 python3[7784]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system.conf.d/override.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765273880.6044233-490-85188014592243/source _original_basename=tmpw3nhvz2a follow=False checksum=a05098bd3d2321238ea1169d0e6f135b35b392d4 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 04:51:22 np0005551604 python3[7834]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec  9 04:51:22 np0005551604 systemd[1]: Reloading.
Dec  9 04:51:22 np0005551604 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  9 04:51:24 np0005551604 python3[7890]: ansible-ansible.builtin.wait_for Invoked with path=/sys/fs/cgroup/system.slice/io.max state=present timeout=30 host=127.0.0.1 connect_timeout=5 delay=0 active_connection_states=['ESTABLISHED', 'FIN_WAIT1', 'FIN_WAIT2', 'SYN_RECV', 'SYN_SENT', 'TIME_WAIT'] sleep=1 port=None search_regex=None exclude_hosts=None msg=None
Dec  9 04:51:24 np0005551604 python3[7916]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/init.scope/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  9 04:51:24 np0005551604 python3[7944]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/machine.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  9 04:51:25 np0005551604 python3[7972]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/system.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  9 04:51:25 np0005551604 python3[8000]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/user.slice/io.max#012 _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  9 04:51:26 np0005551604 python3[8027]: ansible-ansible.legacy.command Invoked with _raw_params=echo "init";    cat /sys/fs/cgroup/init.scope/io.max; echo "machine"; cat /sys/fs/cgroup/machine.slice/io.max; echo "system";  cat /sys/fs/cgroup/system.slice/io.max; echo "user";    cat /sys/fs/cgroup/user.slice/io.max;#012 _uses_shell=True zuul_log_id=fa163ef9-e89a-74ef-b9a8-000000001f1c-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  9 04:51:26 np0005551604 python3[8057]: ansible-ansible.builtin.stat Invoked with path=/sys/fs/cgroup/kubepods.slice/io.max follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
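The wait_for and echo tasks above write identical cgroup v2 io.max limits into init.scope, machine.slice, system.slice and user.slice: device 252:0 is capped at 18000 read/write IOPS and 262144000 bytes/s (250 MiB/s) in each direction, the values are read back for verification, and kubepods.slice is probed with stat. The #012 in the logged parameters is the syslog escape for a newline. The same loop, sketched as tasks built from the logged commands:

    - name: Apply cgroup v2 io.max limits to the top-level slices (from the logged commands)
      ansible.builtin.shell: echo "{{ io_max_line }}" > /sys/fs/cgroup/{{ item }}/io.max
      vars:
        io_max_line: "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000"
      loop:
        - init.scope
        - machine.slice
        - system.slice
        - user.slice

    - name: Read each io.max back, as the verification command does
      ansible.builtin.shell: cat /sys/fs/cgroup/{{ item }}/io.max
      loop: [init.scope, machine.slice, system.slice, user.slice]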
Dec  9 04:51:28 np0005551604 systemd[1]: session-4.scope: Deactivated successfully.
Dec  9 04:51:28 np0005551604 systemd[1]: session-4.scope: Consumed 4.092s CPU time.
Dec  9 04:51:28 np0005551604 systemd-logind[806]: Session 4 logged out. Waiting for processes to exit.
Dec  9 04:51:28 np0005551604 systemd-logind[806]: Removed session 4.
Dec  9 04:51:30 np0005551604 systemd-logind[806]: New session 5 of user zuul.
Dec  9 04:51:30 np0005551604 systemd[1]: Started Session 5 of User zuul.
Dec  9 04:51:30 np0005551604 python3[8090]: ansible-ansible.legacy.dnf Invoked with name=['podman', 'buildah'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Dec  9 04:51:46 np0005551604 kernel: SELinux:  Converting 383 SID table entries...
Dec  9 04:51:46 np0005551604 kernel: SELinux:  policy capability network_peer_controls=1
Dec  9 04:51:46 np0005551604 kernel: SELinux:  policy capability open_perms=1
Dec  9 04:51:46 np0005551604 kernel: SELinux:  policy capability extended_socket_class=1
Dec  9 04:51:46 np0005551604 kernel: SELinux:  policy capability always_check_network=0
Dec  9 04:51:46 np0005551604 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec  9 04:51:46 np0005551604 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec  9 04:51:46 np0005551604 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec  9 04:51:55 np0005551604 kernel: SELinux:  Converting 383 SID table entries...
Dec  9 04:51:55 np0005551604 kernel: SELinux:  policy capability network_peer_controls=1
Dec  9 04:51:55 np0005551604 kernel: SELinux:  policy capability open_perms=1
Dec  9 04:51:55 np0005551604 kernel: SELinux:  policy capability extended_socket_class=1
Dec  9 04:51:55 np0005551604 kernel: SELinux:  policy capability always_check_network=0
Dec  9 04:51:55 np0005551604 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec  9 04:51:55 np0005551604 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec  9 04:51:55 np0005551604 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec  9 04:52:04 np0005551604 kernel: SELinux:  Converting 383 SID table entries...
Dec  9 04:52:04 np0005551604 kernel: SELinux:  policy capability network_peer_controls=1
Dec  9 04:52:04 np0005551604 kernel: SELinux:  policy capability open_perms=1
Dec  9 04:52:04 np0005551604 kernel: SELinux:  policy capability extended_socket_class=1
Dec  9 04:52:04 np0005551604 kernel: SELinux:  policy capability always_check_network=0
Dec  9 04:52:04 np0005551604 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec  9 04:52:04 np0005551604 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec  9 04:52:04 np0005551604 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec  9 04:52:06 np0005551604 setsebool[8156]: The virt_use_nfs policy boolean was changed to 1 by root
Dec  9 04:52:06 np0005551604 setsebool[8156]: The virt_sandbox_use_all_caps policy boolean was changed to 1 by root
Dec  9 04:52:17 np0005551604 kernel: SELinux:  Converting 386 SID table entries...
Dec  9 04:52:17 np0005551604 kernel: SELinux:  policy capability network_peer_controls=1
Dec  9 04:52:17 np0005551604 kernel: SELinux:  policy capability open_perms=1
Dec  9 04:52:17 np0005551604 kernel: SELinux:  policy capability extended_socket_class=1
Dec  9 04:52:17 np0005551604 kernel: SELinux:  policy capability always_check_network=0
Dec  9 04:52:17 np0005551604 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec  9 04:52:17 np0005551604 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec  9 04:52:17 np0005551604 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec  9 04:52:36 np0005551604 dbus-broker-launch[772]: avc:  op=load_policy lsm=selinux seqno=6 res=1
Dec  9 04:52:36 np0005551604 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec  9 04:52:36 np0005551604 systemd[1]: Starting man-db-cache-update.service...
Dec  9 04:52:36 np0005551604 systemd[1]: Reloading.
Dec  9 04:52:36 np0005551604 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  9 04:52:36 np0005551604 systemd[1]: Queuing reload/restart jobs for marked units…
Dec  9 04:52:51 np0005551604 python3[16973]: ansible-ansible.legacy.command Invoked with _raw_params=echo "openstack-k8s-operators+cirobot"#012 _uses_shell=True zuul_log_id=fa163ef9-e89a-8f0d-a351-00000000000a-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  9 04:52:52 np0005551604 kernel: evm: overlay not supported
Dec  9 04:52:52 np0005551604 systemd[4301]: Starting D-Bus User Message Bus...
Dec  9 04:52:52 np0005551604 dbus-broker-launch[17476]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +31: Eavesdropping is deprecated and ignored
Dec  9 04:52:52 np0005551604 dbus-broker-launch[17476]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +33: Eavesdropping is deprecated and ignored
Dec  9 04:52:52 np0005551604 systemd[4301]: Started D-Bus User Message Bus.
Dec  9 04:52:52 np0005551604 dbus-broker-lau[17476]: Ready
Dec  9 04:52:52 np0005551604 systemd[4301]: selinux: avc:  op=load_policy lsm=selinux seqno=6 res=1
Dec  9 04:52:52 np0005551604 systemd[4301]: Created slice Slice /user.
Dec  9 04:52:52 np0005551604 systemd[4301]: podman-17409.scope: unit configures an IP firewall, but not running as root.
Dec  9 04:52:52 np0005551604 systemd[4301]: (This warning is only shown for the first unit using IP firewalling.)
Dec  9 04:52:52 np0005551604 systemd[4301]: Started podman-17409.scope.
Dec  9 04:52:52 np0005551604 systemd[4301]: Started podman-pause-1ca8aea7.scope.
Dec  9 04:52:53 np0005551604 python3[17970]: ansible-ansible.builtin.blockinfile Invoked with state=present insertafter=EOF dest=/etc/containers/registries.conf content=[[registry]]#012location = "38.102.83.20:5001"#012insecure = true path=/etc/containers/registries.conf block=[[registry]]#012location = "38.102.83.20:5001"#012insecure = true marker=# {mark} ANSIBLE MANAGED BLOCK create=False backup=False marker_begin=BEGIN marker_end=END unsafe_writes=False insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 04:52:53 np0005551604 python3[17970]: ansible-ansible.builtin.blockinfile [WARNING] Module remote_tmp /root/.ansible/tmp did not exist and was created with a mode of 0700, this may cause issues when running as another user. To avoid this, create the remote_tmp dir with the correct permissions manually
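The blockinfile call above appends a managed TOML block to /etc/containers/registries.conf marking the CI registry at 38.102.83.20:5001 as insecure (plain HTTP, no TLS verification); again, #012 encodes newlines. Reconstructed from the logged parameters:

    - name: Allow pulls from the insecure CI registry (decoded from the logged blockinfile call)
      ansible.builtin.blockinfile:
        path: /etc/containers/registries.conf
        marker: "# {mark} ANSIBLE MANAGED BLOCK"
        block: |
          [[registry]]
          location = "38.102.83.20:5001"
          insecure = true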
Dec  9 04:52:54 np0005551604 systemd[1]: session-5.scope: Deactivated successfully.
Dec  9 04:52:54 np0005551604 systemd[1]: session-5.scope: Consumed 1min 3.547s CPU time.
Dec  9 04:52:54 np0005551604 systemd-logind[806]: Session 5 logged out. Waiting for processes to exit.
Dec  9 04:52:54 np0005551604 systemd-logind[806]: Removed session 5.
Dec  9 04:53:21 np0005551604 systemd-logind[806]: New session 6 of user zuul.
Dec  9 04:53:21 np0005551604 systemd[1]: Started Session 6 of User zuul.
Dec  9 04:53:22 np0005551604 python3[28780]: ansible-ansible.posix.authorized_key Invoked with user=zuul key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJ378N+Oz5m8eLVC8HHlrxMbp1qIUGFsk3C6HoxKm6dcQaGp03ZHLJaCYgcfGkRl7+5RL+g4qxcj1Em4fs9vNXY= zuul@np0005551603.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  9 04:53:22 np0005551604 python3[28957]: ansible-ansible.posix.authorized_key Invoked with user=root key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJ378N+Oz5m8eLVC8HHlrxMbp1qIUGFsk3C6HoxKm6dcQaGp03ZHLJaCYgcfGkRl7+5RL+g4qxcj1Em4fs9vNXY= zuul@np0005551603.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Dec  9 04:53:23 np0005551604 python3[29334]: ansible-ansible.builtin.user Invoked with name=cloud-admin shell=/bin/bash state=present non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005551604.novalocal update_password=always uid=None group=None groups=None comment=None home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None
Dec  9 04:53:23 np0005551604 python3[29534]: ansible-ansible.posix.authorized_key Invoked with user=cloud-admin key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJ378N+Oz5m8eLVC8HHlrxMbp1qIUGFsk3C6HoxKm6dcQaGp03ZHLJaCYgcfGkRl7+5RL+g4qxcj1Em4fs9vNXY= zuul@np0005551603.novalocal#012 manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
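The three authorized_key calls above install the same ECDSA public key, generated on np0005551603, for zuul, root and the freshly created cloud-admin user, giving the controller node SSH access under all three accounts. Collapsed into one looped task:

    - name: Authorize the controller's key for each management account (key copied from the log)
      ansible.posix.authorized_key:
        user: "{{ item }}"
        state: present
        key: "ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJ378N+Oz5m8eLVC8HHlrxMbp1qIUGFsk3C6HoxKm6dcQaGp03ZHLJaCYgcfGkRl7+5RL+g4qxcj1Em4fs9vNXY= zuul@np0005551603.novalocal"
      loop: [zuul, root, cloud-admin]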
Dec  9 04:53:24 np0005551604 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec  9 04:53:24 np0005551604 systemd[1]: Finished man-db-cache-update.service.
Dec  9 04:53:24 np0005551604 systemd[1]: man-db-cache-update.service: Consumed 55.825s CPU time.
Dec  9 04:53:24 np0005551604 systemd[1]: run-r86525ca06c184a608d590d71624be2a0.service: Deactivated successfully.
Dec  9 04:53:24 np0005551604 python3[29790]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/cloud-admin follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  9 04:53:24 np0005551604 python3[29878]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/cloud-admin mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1765274003.9499478-135-44035750658726/source _original_basename=tmpmntqbjhy follow=False checksum=e7614e5ad3ab06eaae55b8efaa2ed81b63ea5634 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 04:53:25 np0005551604 python3[29928]: ansible-ansible.builtin.hostname Invoked with name=compute-0 use=systemd
Dec  9 04:53:25 np0005551604 systemd[1]: Starting Hostname Service...
Dec  9 04:53:25 np0005551604 systemd[1]: Started Hostname Service.
Dec  9 04:53:25 np0005551604 systemd-hostnamed[29932]: Changed pretty hostname to 'compute-0'
Dec  9 04:53:25 np0005551604 systemd-hostnamed[29932]: Hostname set to <compute-0> (static)
Dec  9 04:53:25 np0005551604 NetworkManager[7184]: <info>  [1765274005.7394] hostname: static hostname changed from "np0005551604.novalocal" to "compute-0"
Dec  9 04:53:25 np0005551604 systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec  9 04:53:25 np0005551604 systemd[1]: Started Network Manager Script Dispatcher Service.
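ansible.builtin.hostname with use=systemd delegates the change to systemd-hostnamed over D-Bus, which is why hostnamed is socket-activated here and why NetworkManager immediately sees the static hostname flip from np0005551604.novalocal to compute-0 and kicks its dispatcher. As a task:

    - name: Set the static hostname via systemd-hostnamed (as logged)
      ansible.builtin.hostname:
        name: compute-0
        use: systemd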
Dec  9 04:53:26 np0005551604 systemd[1]: session-6.scope: Deactivated successfully.
Dec  9 04:53:26 np0005551604 systemd[1]: session-6.scope: Consumed 2.415s CPU time.
Dec  9 04:53:26 np0005551604 systemd-logind[806]: Session 6 logged out. Waiting for processes to exit.
Dec  9 04:53:26 np0005551604 systemd-logind[806]: Removed session 6.
Dec  9 04:53:35 np0005551604 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec  9 04:53:55 np0005551604 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Dec  9 04:58:00 np0005551604 systemd[1]: Starting Cleanup of Temporary Directories...
Dec  9 04:58:00 np0005551604 systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully.
Dec  9 04:58:00 np0005551604 systemd[1]: Finished Cleanup of Temporary Directories.
Dec  9 04:58:00 np0005551604 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dclean.service.mount: Deactivated successfully.
Dec  9 04:58:30 np0005551604 systemd-logind[806]: New session 7 of user zuul.
Dec  9 04:58:30 np0005551604 systemd[1]: Started Session 7 of User zuul.
Dec  9 04:58:30 np0005551604 python3[30030]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  9 04:58:32 np0005551604 python3[30148]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  9 04:58:33 np0005551604 python3[30221]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1765274312.2476404-33644-235297562298365/source mode=0755 _original_basename=delorean.repo follow=False checksum=0f7c85cc67bf467c48edf98d5acc63e62d808324 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 04:58:33 np0005551604 python3[30247]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean-antelope-testing.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  9 04:58:33 np0005551604 python3[30320]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1765274312.2476404-33644-235297562298365/source mode=0755 _original_basename=delorean-antelope-testing.repo follow=False checksum=4ebc56dead962b5d40b8d420dad43b948b84d3fc backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 04:58:34 np0005551604 python3[30346]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-highavailability.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  9 04:58:34 np0005551604 python3[30419]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1765274312.2476404-33644-235297562298365/source mode=0755 _original_basename=repo-setup-centos-highavailability.repo follow=False checksum=55d0f695fd0d8f47cbc3044ce0dcf5f88862490f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 04:58:34 np0005551604 python3[30445]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-powertools.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  9 04:58:35 np0005551604 python3[30518]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1765274312.2476404-33644-235297562298365/source mode=0755 _original_basename=repo-setup-centos-powertools.repo follow=False checksum=4b0cf99aa89c5c5be0151545863a7a7568f67568 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 04:58:35 np0005551604 python3[30544]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-appstream.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  9 04:58:35 np0005551604 python3[30617]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1765274312.2476404-33644-235297562298365/source mode=0755 _original_basename=repo-setup-centos-appstream.repo follow=False checksum=e89244d2503b2996429dda1857290c1e91e393a1 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 04:58:36 np0005551604 python3[30643]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-baseos.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  9 04:58:36 np0005551604 python3[30716]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1765274312.2476404-33644-235297562298365/source mode=0755 _original_basename=repo-setup-centos-baseos.repo follow=False checksum=36d926db23a40dbfa5c84b5e4d43eac6fa2301d6 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 04:58:36 np0005551604 python3[30742]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo.md5 follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Dec  9 04:58:37 np0005551604 python3[30815]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1765274312.2476404-33644-235297562298365/source mode=0755 _original_basename=delorean.repo.md5 follow=False checksum=2583a70b3ee76a9837350b0837bc004a8e52405c backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:02:03 np0005551604 python3[30890]: ansible-ansible.legacy.command Invoked with _raw_params=hostname _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  9 05:07:03 np0005551604 systemd[1]: session-7.scope: Deactivated successfully.
Dec  9 05:07:03 np0005551604 systemd[1]: session-7.scope: Consumed 5.352s CPU time.
Dec  9 05:07:03 np0005551604 systemd-logind[806]: Session 7 logged out. Waiting for processes to exit.
Dec  9 05:07:03 np0005551604 systemd-logind[806]: Removed session 7.
Dec  9 05:17:55 np0005551604 systemd-logind[806]: New session 8 of user zuul.
Dec  9 05:17:55 np0005551604 systemd[1]: Started Session 8 of User zuul.
Dec  9 05:17:56 np0005551604 python3.9[31058]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  9 05:17:57 np0005551604 python3.9[31239]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail#012pushd /var/tmp#012curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz#012pushd repo-setup-main#012python3 -m venv ./venv#012PBR_VERSION=0.0.0 ./venv/bin/pip install ./#012./venv/bin/repo-setup current-podified -b antelope#012popd#012rm -rf repo-setup-main#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
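The logged _raw_params above is a complete shell script with #012 standing in for newlines: it fetches the repo-setup tool from GitHub, installs it into a throwaway venv, runs it to lay down the current-podified antelope repositories, then removes the checkout. Decoded back into a task (pushd/popd assume a bash-compatible /bin/sh, which holds on CentOS Stream 9):

    - name: Install the antelope repos with repo-setup (decoded from the logged one-liner)
      ansible.builtin.shell: |
        set -euxo pipefail
        pushd /var/tmp
        curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz
        pushd repo-setup-main
        python3 -m venv ./venv
        PBR_VERSION=0.0.0 ./venv/bin/pip install ./
        ./venv/bin/repo-setup current-podified -b antelope
        popd
        rm -rf repo-setup-main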
Dec  9 05:18:05 np0005551604 systemd[1]: session-8.scope: Deactivated successfully.
Dec  9 05:18:05 np0005551604 systemd[1]: session-8.scope: Consumed 8.334s CPU time.
Dec  9 05:18:05 np0005551604 systemd-logind[806]: Session 8 logged out. Waiting for processes to exit.
Dec  9 05:18:05 np0005551604 systemd-logind[806]: Removed session 8.
Dec  9 05:18:25 np0005551604 systemd-logind[806]: New session 9 of user zuul.
Dec  9 05:18:25 np0005551604 systemd[1]: Started Session 9 of User zuul.
Dec  9 05:18:27 np0005551604 python3.9[31450]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  9 05:18:27 np0005551604 systemd[1]: session-9.scope: Deactivated successfully.
Dec  9 05:18:27 np0005551604 systemd-logind[806]: Session 9 logged out. Waiting for processes to exit.
Dec  9 05:18:27 np0005551604 systemd-logind[806]: Removed session 9.
Dec  9 05:18:43 np0005551604 systemd-logind[806]: New session 10 of user zuul.
Dec  9 05:18:43 np0005551604 systemd[1]: Started Session 10 of User zuul.
Dec  9 05:18:44 np0005551604 python3.9[31631]: ansible-ansible.legacy.ping Invoked with data=pong
Dec  9 05:18:46 np0005551604 python3.9[31805]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  9 05:18:47 np0005551604 python3.9[31957]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  9 05:18:48 np0005551604 python3.9[32110]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  9 05:18:49 np0005551604 python3.9[32262]: ansible-ansible.builtin.file Invoked with mode=755 path=/etc/ansible/facts.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:18:50 np0005551604 python3.9[32414]: ansible-ansible.legacy.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  9 05:18:51 np0005551604 python3.9[32537]: ansible-ansible.legacy.copy Invoked with dest=/etc/ansible/facts.d/bootc.fact mode=755 src=/home/zuul/.ansible/tmp/ansible-tmp-1765275529.664212-73-46966097715200/.source.fact _original_basename=bootc.fact follow=False checksum=eb4122ce7fc50a38407beb511c4ff8c178005b12 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:18:51 np0005551604 python3.9[32689]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  9 05:18:53 np0005551604 python3.9[32845]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  9 05:18:54 np0005551604 python3.9[32997]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  9 05:18:55 np0005551604 python3.9[33147]: ansible-ansible.builtin.service_facts Invoked
Dec  9 05:19:00 np0005551604 python3.9[33400]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
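The lineinfile call above targets the read-only /proc/cmdline with create=False, so it cannot actually edit anything: it completes cleanly only when cloud-init=disabled is already on the kernel command line, and the write attempt fails otherwise. In effect it is an assertion rather than an edit. A sketch of that pattern:

    - name: Assert cloud-init is disabled on the kernel command line
      ansible.builtin.lineinfile:
        path: /proc/cmdline
        line: cloud-init=disabled
        state: present
        create: false
      # /proc/cmdline is read-only; this is a no-op when the line exists
      # and an error (hence a failed check) when it does not.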
Dec  9 05:19:01 np0005551604 python3.9[33550]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  9 05:19:02 np0005551604 python3.9[33704]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  9 05:19:03 np0005551604 python3.9[33862]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  9 05:19:05 np0005551604 python3.9[33946]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  9 05:19:50 np0005551604 systemd[1]: Reloading.
Dec  9 05:19:50 np0005551604 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  9 05:19:50 np0005551604 systemd[1]: Listening on Device-mapper event daemon FIFOs.
Dec  9 05:19:50 np0005551604 systemd[1]: Reloading.
Dec  9 05:19:50 np0005551604 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  9 05:19:50 np0005551604 systemd[1]: Starting Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling...
Dec  9 05:19:51 np0005551604 systemd[1]: Finished Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling.
Dec  9 05:19:51 np0005551604 systemd[1]: Reloading.
Dec  9 05:19:51 np0005551604 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  9 05:19:51 np0005551604 systemd[1]: Listening on LVM2 poll daemon socket.
Dec  9 05:19:51 np0005551604 dbus-broker-launch[768]: Noticed file-system modification, trigger reload.
Dec  9 05:19:51 np0005551604 dbus-broker-launch[768]: Noticed file-system modification, trigger reload.
Dec  9 05:19:51 np0005551604 dbus-broker-launch[768]: Noticed file-system modification, trigger reload.
Dec  9 05:21:02 np0005551604 kernel: SELinux:  Converting 2716 SID table entries...
Dec  9 05:21:02 np0005551604 kernel: SELinux:  policy capability network_peer_controls=1
Dec  9 05:21:02 np0005551604 kernel: SELinux:  policy capability open_perms=1
Dec  9 05:21:02 np0005551604 kernel: SELinux:  policy capability extended_socket_class=1
Dec  9 05:21:02 np0005551604 kernel: SELinux:  policy capability always_check_network=0
Dec  9 05:21:02 np0005551604 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec  9 05:21:02 np0005551604 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec  9 05:21:02 np0005551604 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec  9 05:21:02 np0005551604 dbus-broker-launch[772]: avc:  op=load_policy lsm=selinux seqno=8 res=1
Dec  9 05:21:03 np0005551604 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec  9 05:21:03 np0005551604 systemd[1]: Starting man-db-cache-update.service...
Dec  9 05:21:03 np0005551604 systemd[1]: Reloading.
Dec  9 05:21:03 np0005551604 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  9 05:21:03 np0005551604 systemd[1]: Starting dnf makecache...
Dec  9 05:21:03 np0005551604 systemd[1]: Queuing reload/restart jobs for marked units…
Dec  9 05:21:03 np0005551604 dnf[34583]: Failed determining last makecache time.
Dec  9 05:21:03 np0005551604 dnf[34583]: delorean-openstack-barbican-42b4c41831408a8e323 141 kB/s | 3.0 kB     00:00
Dec  9 05:21:03 np0005551604 dnf[34583]: delorean-python-glean-10df0bd91b9bc5c9fd9cc02d7 211 kB/s | 3.0 kB     00:00
Dec  9 05:21:03 np0005551604 dnf[34583]: delorean-openstack-cinder-1c00d6490d88e436f26ef 209 kB/s | 3.0 kB     00:00
Dec  9 05:21:03 np0005551604 dnf[34583]: delorean-python-stevedore-c4acc5639fd2329372142 196 kB/s | 3.0 kB     00:00
Dec  9 05:21:03 np0005551604 dnf[34583]: delorean-python-cloudkitty-tests-tempest-2c80f8 154 kB/s | 3.0 kB     00:00
Dec  9 05:21:03 np0005551604 dnf[34583]: delorean-os-refresh-config-9bfc52b5049be2d8de61 221 kB/s | 3.0 kB     00:00
Dec  9 05:21:03 np0005551604 dnf[34583]: delorean-openstack-nova-6f8decf0b4f1aa2e96292b6 151 kB/s | 3.0 kB     00:00
Dec  9 05:21:03 np0005551604 dnf[34583]: delorean-python-designate-tests-tempest-347fdbc 169 kB/s | 3.0 kB     00:00
Dec  9 05:21:03 np0005551604 dnf[34583]: delorean-openstack-glance-1fd12c29b339f30fe823e 193 kB/s | 3.0 kB     00:00
Dec  9 05:21:03 np0005551604 dnf[34583]: delorean-openstack-keystone-e4b40af0ae3698fbbbb 195 kB/s | 3.0 kB     00:00
Dec  9 05:21:03 np0005551604 dnf[34583]: delorean-openstack-manila-3c01b7181572c95dac462 199 kB/s | 3.0 kB     00:00
Dec  9 05:21:03 np0005551604 dnf[34583]: delorean-python-whitebox-neutron-tests-tempest- 159 kB/s | 3.0 kB     00:00
Dec  9 05:21:03 np0005551604 dnf[34583]: delorean-openstack-octavia-ba397f07a7331190208c 160 kB/s | 3.0 kB     00:00
Dec  9 05:21:03 np0005551604 dnf[34583]: delorean-openstack-watcher-c014f81a8647287f6dcc 155 kB/s | 3.0 kB     00:00
Dec  9 05:21:03 np0005551604 dnf[34583]: delorean-ansible-config_template-5ccaa22121a7ff 163 kB/s | 3.0 kB     00:00
Dec  9 05:21:03 np0005551604 dnf[34583]: delorean-puppet-ceph-7352068d7b8c84ded636ab3158 173 kB/s | 3.0 kB     00:00
Dec  9 05:21:03 np0005551604 dnf[34583]: delorean-openstack-swift-dc98a8463506ac520c469a 167 kB/s | 3.0 kB     00:00
Dec  9 05:21:03 np0005551604 dnf[34583]: delorean-python-tempestconf-8515371b7cceebd4282 168 kB/s | 3.0 kB     00:00
Dec  9 05:21:03 np0005551604 dnf[34583]: delorean-openstack-heat-ui-013accbfd179753bc3f0 207 kB/s | 3.0 kB     00:00
Dec  9 05:21:04 np0005551604 dnf[34583]: CentOS Stream 9 - BaseOS                         46 kB/s | 5.4 kB     00:00
Dec  9 05:21:04 np0005551604 dnf[34583]: CentOS Stream 9 - AppStream                      64 kB/s | 5.8 kB     00:00
Dec  9 05:21:04 np0005551604 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec  9 05:21:04 np0005551604 systemd[1]: Finished man-db-cache-update.service.
Dec  9 05:21:04 np0005551604 systemd[1]: man-db-cache-update.service: Consumed 1.315s CPU time.
Dec  9 05:21:04 np0005551604 systemd[1]: run-r514b4e584f7f4bd79cacaa965025fb03.service: Deactivated successfully.
Dec  9 05:21:04 np0005551604 dnf[34583]: CentOS Stream 9 - CRB                            35 kB/s | 5.3 kB     00:00
Dec  9 05:21:04 np0005551604 python3.9[35485]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  9 05:21:04 np0005551604 dnf[34583]: CentOS Stream 9 - Extras packages                29 kB/s | 8.3 kB     00:00
Dec  9 05:21:04 np0005551604 dnf[34583]: dlrn-antelope-testing                           193 kB/s | 3.0 kB     00:00
Dec  9 05:21:04 np0005551604 dnf[34583]: dlrn-antelope-build-deps                        117 kB/s | 3.0 kB     00:00
Dec  9 05:21:04 np0005551604 dnf[34583]: centos9-rabbitmq                                101 kB/s | 3.0 kB     00:00
Dec  9 05:21:04 np0005551604 dnf[34583]: centos9-storage                                 109 kB/s | 3.0 kB     00:00
Dec  9 05:21:05 np0005551604 dnf[34583]: centos9-opstools                                121 kB/s | 3.0 kB     00:00
Dec  9 05:21:05 np0005551604 dnf[34583]: NFV SIG OpenvSwitch                             116 kB/s | 3.0 kB     00:00
Dec  9 05:21:05 np0005551604 dnf[34583]: repo-setup-centos-appstream                     105 kB/s | 4.4 kB     00:00
Dec  9 05:21:05 np0005551604 dnf[34583]: repo-setup-centos-baseos                        192 kB/s | 3.9 kB     00:00
Dec  9 05:21:05 np0005551604 dnf[34583]: repo-setup-centos-highavailability              111 kB/s | 3.9 kB     00:00
Dec  9 05:21:05 np0005551604 dnf[34583]: repo-setup-centos-powertools                    212 kB/s | 4.3 kB     00:00
Dec  9 05:21:05 np0005551604 dnf[34583]: Extra Packages for Enterprise Linux 9 - x86_64  162 kB/s |  28 kB     00:00
Dec  9 05:21:06 np0005551604 dnf[34583]: Metadata cache created.
Dec  9 05:21:06 np0005551604 systemd[1]: dnf-makecache.service: Deactivated successfully.
Dec  9 05:21:06 np0005551604 systemd[1]: Finished dnf makecache.
Dec  9 05:21:06 np0005551604 systemd[1]: dnf-makecache.service: Consumed 1.797s CPU time.
Dec  9 05:21:07 np0005551604 python3.9[35787]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Dec  9 05:21:08 np0005551604 python3.9[35939]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Dec  9 05:21:11 np0005551604 python3.9[36093]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:21:12 np0005551604 python3.9[36245]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
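Swap is assembled in stages: dd allocates a 1 GiB file at /swap (creates=/swap keeps it idempotent), the file is locked down to 0600, and ansible.posix.mount with state=present records the fstab entry without activating anything; mkswap and swapon follow later in the play, near the "Adding 1048572k swap" kernel line below. The stages combined into one sketch:

    - name: Allocate the swap file once
      ansible.builtin.command: dd if=/dev/zero of=/swap count=1024 bs=1M
      args:
        creates: /swap

    - name: Restrict permissions on the swap file
      ansible.builtin.file:
        path: /swap
        owner: root
        group: root
        mode: "0600"

    - name: Record the fstab entry only (state=present does not swapon)
      ansible.posix.mount:
        src: /swap
        path: none
        fstype: swap
        opts: sw
        state: present

    - name: Format and activate, matching the later logged commands
      ansible.builtin.command: mkswap /swap

    - name: Enable the swap file
      ansible.builtin.command: swapon /swap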
Dec  9 05:21:14 np0005551604 python3.9[36397]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  9 05:21:17 np0005551604 python3.9[36549]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  9 05:21:19 np0005551604 python3.9[36672]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765275674.48044-236-40635590411434/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=0c9423b2ffdc702e705e5ef6f0f523e53f830dfe backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:21:20 np0005551604 python3.9[36824]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  9 05:21:21 np0005551604 python3.9[36976]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/vgimportdevices --all _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  9 05:21:22 np0005551604 python3.9[37129]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/lvm/devices/system.devices state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:21:23 np0005551604 python3.9[37281]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Dec  9 05:21:23 np0005551604 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec  9 05:21:23 np0005551604 python3.9[37435]: ansible-ansible.builtin.group Invoked with gid=107 name=qemu state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Dec  9 05:21:24 np0005551604 python3.9[37593]: ansible-ansible.builtin.user Invoked with comment=qemu user group=qemu groups=[''] name=qemu shell=/sbin/nologin state=present uid=107 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
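getent first probes for a qemu account, then the group and user are pinned to GID/UID 107 with /sbin/nologin; 107 is the fixed qemu ID on RHEL-family systems, so files created before the qemu packages land still end up with the expected ownership. The pair of tasks, reduced from the logged parameters:

    - name: qemu group with the fixed RHEL gid
      ansible.builtin.group:
        name: qemu
        gid: 107
        state: present

    - name: qemu user with the fixed RHEL uid and no login shell
      ansible.builtin.user:
        name: qemu
        uid: 107
        group: qemu
        shell: /sbin/nologin
        comment: qemu user
        state: present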
Dec  9 05:21:25 np0005551604 python3.9[37753]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Dec  9 05:21:26 np0005551604 python3.9[37906]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Dec  9 05:21:27 np0005551604 python3.9[38064]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Dec  9 05:21:28 np0005551604 python3.9[38216]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  9 05:21:30 np0005551604 python3.9[38369]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  9 05:21:31 np0005551604 python3.9[38521]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  9 05:21:31 np0005551604 python3.9[38644]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765275690.5965059-355-206399356458261/.source.conf follow=False _original_basename=edpm-modprobe.conf.j2 checksum=8021efe01721d8fa8cab46b95c00ec1be6dbb9d0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec  9 05:21:32 np0005551604 python3.9[38796]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  9 05:21:32 np0005551604 systemd[1]: Starting Load Kernel Modules...
Dec  9 05:21:32 np0005551604 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec  9 05:21:32 np0005551604 kernel: Bridge firewalling registered
Dec  9 05:21:32 np0005551604 systemd-modules-load[38800]: Inserted module 'br_netfilter'
Dec  9 05:21:32 np0005551604 systemd[1]: Finished Load Kernel Modules.
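The copied 99-edpm.conf is a modules-load.d list, and restarting systemd-modules-load.service applies it without a reboot; the kernel messages confirm br_netfilter came in (bridge traffic is no longer passed to ip/ip6tables by default, so the module must be loaded explicitly to get that filtering back). A sketch, assuming br_netfilter is the only entry, since the logged copy hides the file's full content:

    - name: Persist the module list (entries beyond br_netfilter are unknown from the log)
      ansible.builtin.copy:
        dest: /etc/modules-load.d/99-edpm.conf
        mode: "0644"
        content: |
          br_netfilter

    - name: Apply the module list now
      ansible.builtin.systemd:
        name: systemd-modules-load.service
        state: restarted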
Dec  9 05:21:33 np0005551604 python3.9[38955]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  9 05:21:34 np0005551604 python3.9[39078]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysctl.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765275693.0635216-378-147068434606658/.source.conf follow=False _original_basename=edpm-sysctl.conf.j2 checksum=2a366439721b855adcfe4d7f152babb68596a007 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec  9 05:21:35 np0005551604 python3.9[39230]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  9 05:21:38 np0005551604 dbus-broker-launch[768]: Noticed file-system modification, trigger reload.
Dec  9 05:21:38 np0005551604 dbus-broker-launch[768]: Noticed file-system modification, trigger reload.
Dec  9 05:21:38 np0005551604 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec  9 05:21:38 np0005551604 systemd[1]: Starting man-db-cache-update.service...
Dec  9 05:21:38 np0005551604 systemd[1]: Reloading.
Dec  9 05:21:38 np0005551604 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  9 05:21:39 np0005551604 systemd[1]: Queuing reload/restart jobs for marked units…
Dec  9 05:21:41 np0005551604 python3.9[41301]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  9 05:21:41 np0005551604 python3.9[42347]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Dec  9 05:21:42 np0005551604 python3.9[43062]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  9 05:21:42 np0005551604 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec  9 05:21:42 np0005551604 systemd[1]: Finished man-db-cache-update.service.
Dec  9 05:21:42 np0005551604 systemd[1]: man-db-cache-update.service: Consumed 4.777s CPU time.
Dec  9 05:21:42 np0005551604 systemd[1]: run-rfd6922ee7b38422ba3a8de83e46ce125.service: Deactivated successfully.
Dec  9 05:21:43 np0005551604 python3.9[43430]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/tuned-adm profile throughput-performance _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  9 05:21:43 np0005551604 systemd[1]: Starting Dynamic System Tuning Daemon...
Dec  9 05:21:44 np0005551604 systemd[1]: Starting Authorization Manager...
Dec  9 05:21:44 np0005551604 systemd[1]: Started Dynamic System Tuning Daemon.
Dec  9 05:21:44 np0005551604 polkitd[43647]: Started polkitd version 0.117
Dec  9 05:21:44 np0005551604 systemd[1]: Started Authorization Manager.
Dec  9 05:21:45 np0005551604 python3.9[43818]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  9 05:21:45 np0005551604 systemd[1]: Stopping Dynamic System Tuning Daemon...
Dec  9 05:21:45 np0005551604 systemd[1]: tuned.service: Deactivated successfully.
Dec  9 05:21:45 np0005551604 systemd[1]: Stopped Dynamic System Tuning Daemon.
Dec  9 05:21:45 np0005551604 systemd[1]: Starting Dynamic System Tuning Daemon...
Dec  9 05:21:45 np0005551604 systemd[1]: Started Dynamic System Tuning Daemon.
Dec  9 05:21:46 np0005551604 python3.9[43979]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Dec  9 05:21:48 np0005551604 python3.9[44131]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  9 05:21:48 np0005551604 systemd[1]: Reloading.
Dec  9 05:21:48 np0005551604 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  9 05:21:49 np0005551604 python3.9[44319]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  9 05:21:49 np0005551604 systemd[1]: Reloading.
Dec  9 05:21:49 np0005551604 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  9 05:21:50 np0005551604 python3.9[44508]: ansible-ansible.legacy.command Invoked with _raw_params=mkswap "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  9 05:21:51 np0005551604 python3.9[44661]: ansible-ansible.legacy.command Invoked with _raw_params=swapon "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  9 05:21:51 np0005551604 kernel: Adding 1048572k swap on /swap.  Priority:-2 extents:1 across:1048572k 
Dec  9 05:21:52 np0005551604 python3.9[44814]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/update-ca-trust _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  9 05:21:54 np0005551604 python3.9[44976]: ansible-ansible.legacy.command Invoked with _raw_params=echo 2 >/sys/kernel/mm/ksm/run _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
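With ksm.service and ksmtuned.service stopped and disabled above, writing 2 to /sys/kernel/mm/ksm/run not only stops kernel samepage merging but also unmerges pages KSM had already shared (writing 0 would merely stop scanning). Sketched with the shell module so a shell actually performs the redirection; note the logged call ran through the command module with _uses_shell=False, where the ">" would be passed to echo as a literal argument:

    - name: Disable KSM and unmerge already-shared pages
      ansible.builtin.shell: echo 2 > /sys/kernel/mm/ksm/run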
Dec  9 05:21:54 np0005551604 python3.9[45129]: ansible-ansible.builtin.systemd Invoked with name=systemd-sysctl.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  9 05:21:55 np0005551604 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec  9 05:21:55 np0005551604 systemd[1]: Stopped Apply Kernel Variables.
Dec  9 05:21:55 np0005551604 systemd[1]: Stopping Apply Kernel Variables...
Dec  9 05:21:55 np0005551604 systemd[1]: Starting Apply Kernel Variables...
Dec  9 05:21:55 np0005551604 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Dec  9 05:21:55 np0005551604 systemd[1]: Finished Apply Kernel Variables.
Dec  9 05:21:55 np0005551604 systemd[1]: session-10.scope: Deactivated successfully.
Dec  9 05:21:55 np0005551604 systemd[1]: session-10.scope: Consumed 2min 16.946s CPU time.
Dec  9 05:21:55 np0005551604 systemd-logind[806]: Session 10 logged out. Waiting for processes to exit.
Dec  9 05:21:55 np0005551604 systemd-logind[806]: Removed session 10.
Dec  9 05:22:01 np0005551604 systemd-logind[806]: New session 11 of user zuul.
Dec  9 05:22:01 np0005551604 systemd[1]: Started Session 11 of User zuul.
Dec  9 05:22:02 np0005551604 python3.9[45312]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  9 05:22:04 np0005551604 python3.9[45466]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  9 05:22:05 np0005551604 python3.9[45622]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  9 05:22:06 np0005551604 python3.9[45773]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  9 05:22:07 np0005551604 python3.9[45929]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  9 05:22:09 np0005551604 python3.9[46013]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  9 05:22:11 np0005551604 python3.9[46166]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  9 05:22:12 np0005551604 python3.9[46337]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:22:13 np0005551604 python3.9[46489]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  9 05:22:13 np0005551604 systemd[1]: var-lib-containers-storage-overlay-compat826056121-merged.mount: Deactivated successfully.
Dec  9 05:22:13 np0005551604 podman[46490]: 2025-12-09 10:22:13.342446552 +0000 UTC m=+0.060836813 system refresh
Dec  9 05:22:14 np0005551604 python3.9[46652]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  9 05:22:14 np0005551604 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  9 05:22:15 np0005551604 python3.9[46775]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/networks/podman.json group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765275733.5331376-109-212200433808551/.source.json follow=False _original_basename=podman_network_config.j2 checksum=938d75e8df053001260675f2a6ecbedd13d6884b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:22:15 np0005551604 python3.9[46927]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  9 05:22:16 np0005551604 python3.9[47050]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765275735.2148852-124-102073897701465/.source.conf follow=False _original_basename=registries.conf.j2 checksum=75cbff578cac25096c07a1fc71278e69a134eb3a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec  9 05:22:17 np0005551604 python3.9[47202]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec  9 05:22:18 np0005551604 python3.9[47354]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec  9 05:22:18 np0005551604 python3.9[47506]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec  9 05:22:19 np0005551604 python3.9[47658]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
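
[editor's note] The four ini_file invocations above pin container runtime defaults in /etc/containers/containers.conf: a per-container PID limit, journald event logging, crun as the OCI runtime, and netavark as the network backend. A sketch of an equivalent single looped task, with sections, options, and values taken from the logged calls:

    - name: Pin container runtime defaults in containers.conf
      community.general.ini_file:
        path: /etc/containers/containers.conf
        section: "{{ item.section }}"
        option: "{{ item.option }}"
        value: "{{ item.value }}"
        create: true
        owner: root
        group: root
        mode: "0644"
        setype: etc_t
      loop:
        - { section: containers, option: pids_limit, value: "4096" }
        - { section: engine, option: events_logger, value: '"journald"' }
        - { section: engine, option: runtime, value: '"crun"' }
        - { section: network, option: network_backend, value: '"netavark"' }
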
Dec  9 05:22:20 np0005551604 python3.9[47808]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  9 05:22:21 np0005551604 python3.9[47962]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Dec  9 05:22:23 np0005551604 python3.9[48115]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openstack-network-scripts'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Dec  9 05:22:26 np0005551604 python3.9[48275]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['podman', 'buildah'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Dec  9 05:22:28 np0005551604 python3.9[48428]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['tuned', 'tuned-profiles-cpu-partitioning'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Dec  9 05:22:30 np0005551604 python3.9[48581]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['NetworkManager-ovs'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Dec  9 05:22:32 np0005551604 python3.9[48737]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['os-net-config'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Dec  9 05:22:36 np0005551604 python3.9[48905]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openssh-server'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Dec  9 05:22:38 np0005551604 python3.9[49058]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['libvirt', 'libvirt-admin', 'libvirt-client', 'libvirt-daemon', 'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Dec  9 05:22:52 np0005551604 python3.9[49396]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['iscsi-initiator-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
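
[editor's note] Every dnf entry from 05:22:21 through here runs with download_only=True: packages are staged into the local cache first, so the later state=present transactions (05:24:20 and 05:24:51 onward) can install from cache without re-downloading. A sketch of one such pre-fetch task, using the package list logged at 05:22:26:

    - name: Pre-fetch container tooling without installing it
      ansible.builtin.dnf:
        name:
          - podman
          - buildah
        download_only: true
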
Dec  9 05:22:54 np0005551604 python3.9[49552]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:22:55 np0005551604 python3.9[49727]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  9 05:22:56 np0005551604 python3.9[49850]: ansible-ansible.legacy.copy Invoked with dest=/root/.config/containers/auth.json group=zuul mode=0660 owner=zuul src=/home/zuul/.ansible/tmp/ansible-tmp-1765275775.016902-272-262647550649571/.source.json _original_basename=.oogsf4cq follow=False checksum=bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
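
[editor's note] The auth.json written here appears to be empty: the logged SHA-1 (bf21a9e8...) matches the two-character JSON object "{}", i.e. an empty credentials store. A sketch under that assumption:

    - name: Seed an empty registry auth file
      ansible.builtin.copy:
        content: "{}"    # assumption: the logged SHA-1 corresponds to an empty JSON object
        dest: /root/.config/containers/auth.json
        owner: zuul
        group: zuul
        mode: "0660"
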
Dec  9 05:22:57 np0005551604 python3.9[50002]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Dec  9 05:22:57 np0005551604 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  9 05:22:59 np0005551604 systemd[1]: var-lib-containers-storage-overlay-compat2186088384-lower\x2dmapped.mount: Deactivated successfully.
Dec  9 05:23:03 np0005551604 podman[50015]: 2025-12-09 10:23:03.640938768 +0000 UTC m=+6.241893103 image pull a17927617ef5a603f0594ee0d6df65aabdc9e0303ccc5a52c36f193de33ee0fe quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Dec  9 05:23:03 np0005551604 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  9 05:23:03 np0005551604 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  9 05:23:03 np0005551604 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
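
[editor's note] This is the first of several image pre-pulls below that follow the same pattern (ovn-controller, neutron-metadata-agent-ovn, multipathd, nova-compute, ceilometer-compute, node-exporter, ceilometer-ipmi, kepler). A sketch of the task behind these entries, with values from the first invocation:

    - name: Pre-pull the ovn-controller image
      containers.podman.podman_image:
        name: quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
        auth_file: /root/.config/containers/auth.json
        pull: true
        state: present
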
Dec  9 05:23:04 np0005551604 python3.9[50311]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Dec  9 05:23:04 np0005551604 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  9 05:23:17 np0005551604 podman[50323]: 2025-12-09 10:23:17.256255693 +0000 UTC m=+12.519198874 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec  9 05:23:17 np0005551604 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  9 05:23:17 np0005551604 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  9 05:23:17 np0005551604 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  9 05:23:18 np0005551604 python3.9[50617]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Dec  9 05:23:18 np0005551604 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  9 05:23:19 np0005551604 podman[50629]: 2025-12-09 10:23:19.833520819 +0000 UTC m=+1.442707292 image pull bcd3898ac099c7fff3d2ff3fc32de931119ed36068f8a2617bd8fa95e51d1b81 quay.io/podified-antelope-centos9/openstack-multipathd:current-podified
Dec  9 05:23:19 np0005551604 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  9 05:23:19 np0005551604 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  9 05:23:19 np0005551604 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  9 05:23:20 np0005551604 python3.9[50863]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Dec  9 05:23:32 np0005551604 podman[50874]: 2025-12-09 10:23:32.842862455 +0000 UTC m=+12.014954323 image pull e3166cc074f328e3b121ff82d56ed43a2542af699baffe6874520fe3837c2b18 quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Dec  9 05:23:32 np0005551604 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  9 05:23:32 np0005551604 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  9 05:23:32 np0005551604 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  9 05:23:34 np0005551604 python3.9[51142]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Dec  9 05:23:34 np0005551604 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  9 05:23:52 np0005551604 podman[51154]: 2025-12-09 10:23:52.118744242 +0000 UTC m=+18.036640620 image pull b1b6d71b432c07886b3bae74df4dc9841d1f26407d5f96d6c1e400b0154d9a3d quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested
Dec  9 05:23:52 np0005551604 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  9 05:23:52 np0005551604 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  9 05:23:52 np0005551604 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  9 05:23:52 np0005551604 python3.9[51483]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/prometheus/node-exporter:v1.5.0 tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Dec  9 05:23:54 np0005551604 podman[51496]: 2025-12-09 10:23:54.191523618 +0000 UTC m=+1.159382361 image pull 0da6a335fe1356545476b749c68f022c897de3a2139e8f0054f6937349ee2b83 quay.io/prometheus/node-exporter:v1.5.0
Dec  9 05:23:54 np0005551604 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  9 05:23:54 np0005551604 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  9 05:23:54 np0005551604 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  9 05:23:55 np0005551604 python3.9[51771]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Dec  9 05:23:58 np0005551604 podman[51783]: 2025-12-09 10:23:58.322892877 +0000 UTC m=+3.140946801 image pull a92f7bca491c0b0ce2687db04282e6791be0613adb46862c56450b0e1308679d quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified
Dec  9 05:23:58 np0005551604 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  9 05:23:58 np0005551604 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  9 05:23:58 np0005551604 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  9 05:23:59 np0005551604 python3.9[52039]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/sustainable_computing_io/kepler:release-0.7.12 tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Dec  9 05:23:59 np0005551604 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  9 05:24:06 np0005551604 podman[52051]: 2025-12-09 10:24:06.331515471 +0000 UTC m=+6.931333623 image pull ed61e3ea3188391c18595d8ceada2a5a01f0ece915c62fde355798735b5208d7 quay.io/sustainable_computing_io/kepler:release-0.7.12
Dec  9 05:24:06 np0005551604 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  9 05:24:06 np0005551604 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  9 05:24:06 np0005551604 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  9 05:24:07 np0005551604 systemd-logind[806]: Session 11 logged out. Waiting for processes to exit.
Dec  9 05:24:07 np0005551604 systemd[1]: session-11.scope: Deactivated successfully.
Dec  9 05:24:07 np0005551604 systemd[1]: session-11.scope: Consumed 2min 25.907s CPU time.
Dec  9 05:24:07 np0005551604 systemd-logind[806]: Removed session 11.
Dec  9 05:24:12 np0005551604 systemd-logind[806]: New session 12 of user zuul.
Dec  9 05:24:12 np0005551604 systemd[1]: Started Session 12 of User zuul.
Dec  9 05:24:13 np0005551604 python3.9[52452]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  9 05:24:14 np0005551604 python3.9[52608]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Dec  9 05:24:15 np0005551604 python3.9[52761]: ansible-ansible.builtin.group Invoked with gid=42476 name=openvswitch state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Dec  9 05:24:16 np0005551604 python3.9[52919]: ansible-ansible.builtin.user Invoked with comment=openvswitch user group=openvswitch groups=['hugetlbfs'] name=openvswitch shell=/sbin/nologin state=present uid=42476 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
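
[editor's note] The getent/group/user trio above pre-creates the openvswitch service account with a fixed UID/GID (42476) before the package is installed, so package files land with stable ownership. A sketch from the logged parameters:

    - name: Ensure the openvswitch group exists with a fixed GID
      ansible.builtin.group:
        name: openvswitch
        gid: 42476
        state: present

    - name: Ensure the openvswitch service account exists
      ansible.builtin.user:
        name: openvswitch
        uid: 42476
        group: openvswitch
        groups:
          - hugetlbfs
        shell: /sbin/nologin
        comment: openvswitch user
        state: present
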
Dec  9 05:24:17 np0005551604 python3.9[53079]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  9 05:24:18 np0005551604 python3.9[53163]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Dec  9 05:24:20 np0005551604 python3.9[53325]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  9 05:24:36 np0005551604 kernel: SELinux:  Converting 2731 SID table entries...
Dec  9 05:24:36 np0005551604 kernel: SELinux:  policy capability network_peer_controls=1
Dec  9 05:24:36 np0005551604 kernel: SELinux:  policy capability open_perms=1
Dec  9 05:24:36 np0005551604 kernel: SELinux:  policy capability extended_socket_class=1
Dec  9 05:24:36 np0005551604 kernel: SELinux:  policy capability always_check_network=0
Dec  9 05:24:36 np0005551604 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec  9 05:24:36 np0005551604 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec  9 05:24:36 np0005551604 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec  9 05:24:37 np0005551604 dbus-broker-launch[772]: avc:  op=load_policy lsm=selinux seqno=9 res=1
Dec  9 05:24:37 np0005551604 systemd[1]: Started daily update of the root trust anchor for DNSSEC.
Dec  9 05:24:38 np0005551604 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec  9 05:24:38 np0005551604 systemd[1]: Starting man-db-cache-update.service...
Dec  9 05:24:38 np0005551604 systemd[1]: Reloading.
Dec  9 05:24:39 np0005551604 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  9 05:24:39 np0005551604 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  9 05:24:39 np0005551604 systemd[1]: Queuing reload/restart jobs for marked units…
Dec  9 05:24:42 np0005551604 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec  9 05:24:42 np0005551604 systemd[1]: Finished man-db-cache-update.service.
Dec  9 05:24:42 np0005551604 systemd[1]: run-r531628cc023549a280d2755f8dd0ecd8.service: Deactivated successfully.
Dec  9 05:24:44 np0005551604 python3.9[54423]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec  9 05:24:44 np0005551604 systemd[1]: Reloading.
Dec  9 05:24:45 np0005551604 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  9 05:24:45 np0005551604 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  9 05:24:45 np0005551604 systemd[1]: Starting Open vSwitch Database Unit...
Dec  9 05:24:45 np0005551604 chown[54465]: /usr/bin/chown: cannot access '/run/openvswitch': No such file or directory
Dec  9 05:24:45 np0005551604 ovs-ctl[54470]: /etc/openvswitch/conf.db does not exist ... (warning).
Dec  9 05:24:46 np0005551604 ovs-ctl[54470]: Creating empty database /etc/openvswitch/conf.db [  OK  ]
Dec  9 05:24:46 np0005551604 ovs-ctl[54470]: Starting ovsdb-server [  OK  ]
Dec  9 05:24:46 np0005551604 ovs-vsctl[54519]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait -- init -- set Open_vSwitch . db-version=8.5.1
Dec  9 05:24:46 np0005551604 ovs-vsctl[54539]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait set Open_vSwitch . ovs-version=3.3.5-115.el9s "external-ids:system-id=\"9ec27861-bbe8-48fb-b30f-25b967e1609e\"" "external-ids:rundir=\"/var/run/openvswitch\"" "system-type=\"centos\"" "system-version=\"9\""
Dec  9 05:24:46 np0005551604 ovs-ctl[54470]: Configuring Open vSwitch system IDs [  OK  ]
Dec  9 05:24:46 np0005551604 ovs-vsctl[54545]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Dec  9 05:24:46 np0005551604 ovs-ctl[54470]: Enabling remote OVSDB managers [  OK  ]
Dec  9 05:24:46 np0005551604 systemd[1]: Started Open vSwitch Database Unit.
Dec  9 05:24:46 np0005551604 systemd[1]: Starting Open vSwitch Delete Transient Ports...
Dec  9 05:24:46 np0005551604 systemd[1]: Finished Open vSwitch Delete Transient Ports.
Dec  9 05:24:46 np0005551604 systemd[1]: Starting Open vSwitch Forwarding Unit...
Dec  9 05:24:46 np0005551604 kernel: openvswitch: Open vSwitch switching datapath
Dec  9 05:24:46 np0005551604 ovs-ctl[54590]: Inserting openvswitch module [  OK  ]
Dec  9 05:24:46 np0005551604 ovs-ctl[54559]: Starting ovs-vswitchd [  OK  ]
Dec  9 05:24:46 np0005551604 ovs-vsctl[54607]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Dec  9 05:24:46 np0005551604 ovs-ctl[54559]: Enabling remote OVSDB managers [  OK  ]
Dec  9 05:24:46 np0005551604 systemd[1]: Started Open vSwitch Forwarding Unit.
Dec  9 05:24:46 np0005551604 systemd[1]: Starting Open vSwitch...
Dec  9 05:24:46 np0005551604 systemd[1]: Finished Open vSwitch.
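
[editor's note] The sequence from 05:24:44 shows the systemd task enabling openvswitch.service, the first-boot creation of /etc/openvswitch/conf.db (the chown complaint about /run/openvswitch reflects that directory not existing yet on first start), and the forwarding unit loading the kernel datapath module. A sketch of the enabling task plus an illustrative follow-up check, mirroring how this log drives other commands through Ansible:

    - name: Enable and start Open vSwitch
      ansible.builtin.systemd:
        name: openvswitch.service
        enabled: true
        masked: false
        state: started

    - name: Confirm ovsdb-server is answering   # illustrative check, not in the log
      ansible.builtin.command: ovs-vsctl show
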
Dec  9 05:24:47 np0005551604 python3.9[54759]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  9 05:24:48 np0005551604 python3.9[54911]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Dec  9 05:24:49 np0005551604 kernel: SELinux:  Converting 2745 SID table entries...
Dec  9 05:24:49 np0005551604 kernel: SELinux:  policy capability network_peer_controls=1
Dec  9 05:24:49 np0005551604 kernel: SELinux:  policy capability open_perms=1
Dec  9 05:24:49 np0005551604 kernel: SELinux:  policy capability extended_socket_class=1
Dec  9 05:24:49 np0005551604 kernel: SELinux:  policy capability always_check_network=0
Dec  9 05:24:49 np0005551604 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec  9 05:24:49 np0005551604 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec  9 05:24:49 np0005551604 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec  9 05:24:50 np0005551604 python3.9[55066]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  9 05:24:51 np0005551604 dbus-broker-launch[772]: avc:  op=load_policy lsm=selinux seqno=10 res=1
Dec  9 05:24:51 np0005551604 python3.9[55224]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  9 05:24:53 np0005551604 python3.9[55377]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  9 05:24:55 np0005551604 python3.9[55664]: ansible-ansible.builtin.file Invoked with mode=0750 path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
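
[editor's note] The sefcontext task at 05:24:48 registers a persistent SELinux file-context rule for /var/lib/edpm-config (the policy reload it triggers is the kernel burst above), and the file task here creates the directory with the matching label. A sketch of the pair from the logged parameters:

    - name: Register a persistent SELinux file context for edpm-config
      community.general.sefcontext:
        target: '/var/lib/edpm-config(/.*)?'
        setype: container_file_t
        selevel: s0
        state: present
        reload: true

    - name: Create the directory with the matching label
      ansible.builtin.file:
        path: /var/lib/edpm-config
        state: directory
        mode: "0750"
        setype: container_file_t
        selevel: s0
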
Dec  9 05:24:55 np0005551604 python3.9[55814]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  9 05:24:56 np0005551604 python3.9[55968]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  9 05:24:59 np0005551604 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec  9 05:24:59 np0005551604 systemd[1]: Starting man-db-cache-update.service...
Dec  9 05:24:59 np0005551604 systemd[1]: Reloading.
Dec  9 05:24:59 np0005551604 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  9 05:24:59 np0005551604 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  9 05:24:59 np0005551604 systemd[1]: Queuing reload/restart jobs for marked units…
Dec  9 05:25:01 np0005551604 python3.9[56284]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  9 05:25:01 np0005551604 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Dec  9 05:25:01 np0005551604 systemd[1]: Stopped Network Manager Wait Online.
Dec  9 05:25:01 np0005551604 systemd[1]: Stopping Network Manager Wait Online...
Dec  9 05:25:01 np0005551604 systemd[1]: Stopping Network Manager...
Dec  9 05:25:01 np0005551604 NetworkManager[7184]: <info>  [1765275901.8110] caught SIGTERM, shutting down normally.
Dec  9 05:25:01 np0005551604 NetworkManager[7184]: <info>  [1765275901.8128] dhcp4 (eth0): canceled DHCP transaction
Dec  9 05:25:01 np0005551604 NetworkManager[7184]: <info>  [1765275901.8129] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec  9 05:25:01 np0005551604 NetworkManager[7184]: <info>  [1765275901.8129] dhcp4 (eth0): state changed no lease
Dec  9 05:25:01 np0005551604 NetworkManager[7184]: <info>  [1765275901.8134] manager: NetworkManager state is now CONNECTED_SITE
Dec  9 05:25:01 np0005551604 systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec  9 05:25:01 np0005551604 systemd[1]: Started Network Manager Script Dispatcher Service.
Dec  9 05:25:02 np0005551604 NetworkManager[7184]: <info>  [1765275902.2299] exiting (success)
Dec  9 05:25:02 np0005551604 systemd[1]: NetworkManager.service: Deactivated successfully.
Dec  9 05:25:02 np0005551604 systemd[1]: Stopped Network Manager.
Dec  9 05:25:02 np0005551604 systemd[1]: NetworkManager.service: Consumed 15.787s CPU time, 4.1M memory peak, read 0B from disk, written 31.5K to disk.
Dec  9 05:25:02 np0005551604 systemd[1]: Starting Network Manager...
Dec  9 05:25:02 np0005551604 NetworkManager[56302]: <info>  [1765275902.2932] NetworkManager (version 1.54.2-1.el9) is starting... (after a restart, boot:f43569a1-1096-4e67-91b2-bda287c55398)
Dec  9 05:25:02 np0005551604 NetworkManager[56302]: <info>  [1765275902.2934] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Dec  9 05:25:02 np0005551604 NetworkManager[56302]: <info>  [1765275902.2983] manager[0x55c704c6f000]: monitoring kernel firmware directory '/lib/firmware'.
Dec  9 05:25:02 np0005551604 systemd[1]: Starting Hostname Service...
Dec  9 05:25:02 np0005551604 systemd[1]: Started Hostname Service.
Dec  9 05:25:02 np0005551604 NetworkManager[56302]: <info>  [1765275902.3770] hostname: hostname: using hostnamed
Dec  9 05:25:02 np0005551604 NetworkManager[56302]: <info>  [1765275902.3771] hostname: static hostname changed from (none) to "compute-0"
Dec  9 05:25:02 np0005551604 NetworkManager[56302]: <info>  [1765275902.3775] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Dec  9 05:25:02 np0005551604 NetworkManager[56302]: <info>  [1765275902.3779] manager[0x55c704c6f000]: rfkill: Wi-Fi hardware radio set enabled
Dec  9 05:25:02 np0005551604 NetworkManager[56302]: <info>  [1765275902.3779] manager[0x55c704c6f000]: rfkill: WWAN hardware radio set enabled
Dec  9 05:25:02 np0005551604 NetworkManager[56302]: <info>  [1765275902.3802] Loaded device plugin: NMOvsFactory (/usr/lib64/NetworkManager/1.54.2-1.el9/libnm-device-plugin-ovs.so)
Dec  9 05:25:02 np0005551604 NetworkManager[56302]: <info>  [1765275902.3811] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.2-1.el9/libnm-device-plugin-team.so)
Dec  9 05:25:02 np0005551604 NetworkManager[56302]: <info>  [1765275902.3811] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Dec  9 05:25:02 np0005551604 NetworkManager[56302]: <info>  [1765275902.3812] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Dec  9 05:25:02 np0005551604 NetworkManager[56302]: <info>  [1765275902.3812] manager: Networking is enabled by state file
Dec  9 05:25:02 np0005551604 NetworkManager[56302]: <info>  [1765275902.3814] settings: Loaded settings plugin: keyfile (internal)
Dec  9 05:25:02 np0005551604 NetworkManager[56302]: <info>  [1765275902.3818] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.2-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Dec  9 05:25:02 np0005551604 NetworkManager[56302]: <info>  [1765275902.3836] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Dec  9 05:25:02 np0005551604 NetworkManager[56302]: <info>  [1765275902.3843] dhcp: init: Using DHCP client 'internal'
Dec  9 05:25:02 np0005551604 NetworkManager[56302]: <info>  [1765275902.3845] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Dec  9 05:25:02 np0005551604 NetworkManager[56302]: <info>  [1765275902.3849] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  9 05:25:02 np0005551604 NetworkManager[56302]: <info>  [1765275902.3853] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Dec  9 05:25:02 np0005551604 NetworkManager[56302]: <info>  [1765275902.3858] device (lo): Activation: starting connection 'lo' (4d2460cc-3851-4697-811d-bb6085f75db6)
Dec  9 05:25:02 np0005551604 NetworkManager[56302]: <info>  [1765275902.3863] device (eth0): carrier: link connected
Dec  9 05:25:02 np0005551604 NetworkManager[56302]: <info>  [1765275902.3866] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Dec  9 05:25:02 np0005551604 NetworkManager[56302]: <info>  [1765275902.3870] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Dec  9 05:25:02 np0005551604 NetworkManager[56302]: <info>  [1765275902.3870] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Dec  9 05:25:02 np0005551604 NetworkManager[56302]: <info>  [1765275902.3874] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Dec  9 05:25:02 np0005551604 NetworkManager[56302]: <info>  [1765275902.3878] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Dec  9 05:25:02 np0005551604 NetworkManager[56302]: <info>  [1765275902.3882] device (eth1): carrier: link connected
Dec  9 05:25:02 np0005551604 NetworkManager[56302]: <info>  [1765275902.3885] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Dec  9 05:25:02 np0005551604 NetworkManager[56302]: <info>  [1765275902.3890] manager: (eth1): assume: will attempt to assume matching connection 'ci-private-network' (6b6a22e5-bf6a-510d-869a-e83c7a7cb57f) (indicated)
Dec  9 05:25:02 np0005551604 NetworkManager[56302]: <info>  [1765275902.3890] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Dec  9 05:25:02 np0005551604 NetworkManager[56302]: <info>  [1765275902.3895] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Dec  9 05:25:02 np0005551604 NetworkManager[56302]: <info>  [1765275902.3900] device (eth1): Activation: starting connection 'ci-private-network' (6b6a22e5-bf6a-510d-869a-e83c7a7cb57f)
Dec  9 05:25:02 np0005551604 systemd[1]: Started Network Manager.
Dec  9 05:25:02 np0005551604 NetworkManager[56302]: <info>  [1765275902.3904] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Dec  9 05:25:02 np0005551604 NetworkManager[56302]: <info>  [1765275902.3909] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Dec  9 05:25:02 np0005551604 NetworkManager[56302]: <info>  [1765275902.3922] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Dec  9 05:25:02 np0005551604 NetworkManager[56302]: <info>  [1765275902.3923] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Dec  9 05:25:02 np0005551604 NetworkManager[56302]: <info>  [1765275902.3926] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Dec  9 05:25:02 np0005551604 NetworkManager[56302]: <info>  [1765275902.3928] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Dec  9 05:25:02 np0005551604 NetworkManager[56302]: <info>  [1765275902.3930] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Dec  9 05:25:02 np0005551604 NetworkManager[56302]: <info>  [1765275902.3932] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Dec  9 05:25:02 np0005551604 NetworkManager[56302]: <info>  [1765275902.3943] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Dec  9 05:25:02 np0005551604 NetworkManager[56302]: <info>  [1765275902.3949] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Dec  9 05:25:02 np0005551604 NetworkManager[56302]: <info>  [1765275902.3952] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec  9 05:25:02 np0005551604 NetworkManager[56302]: <info>  [1765275902.3958] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Dec  9 05:25:02 np0005551604 NetworkManager[56302]: <info>  [1765275902.3970] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Dec  9 05:25:02 np0005551604 NetworkManager[56302]: <info>  [1765275902.3977] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Dec  9 05:25:02 np0005551604 NetworkManager[56302]: <info>  [1765275902.3979] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Dec  9 05:25:02 np0005551604 NetworkManager[56302]: <info>  [1765275902.3983] device (lo): Activation: successful, device activated.
Dec  9 05:25:02 np0005551604 NetworkManager[56302]: <info>  [1765275902.3989] dhcp4 (eth0): state changed new lease, address=38.102.83.201
Dec  9 05:25:02 np0005551604 NetworkManager[56302]: <info>  [1765275902.3995] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Dec  9 05:25:02 np0005551604 systemd[1]: Starting Network Manager Wait Online...
Dec  9 05:25:02 np0005551604 NetworkManager[56302]: <info>  [1765275902.9701] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Dec  9 05:25:02 np0005551604 NetworkManager[56302]: <info>  [1765275902.9729] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Dec  9 05:25:02 np0005551604 NetworkManager[56302]: <info>  [1765275902.9737] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Dec  9 05:25:02 np0005551604 NetworkManager[56302]: <info>  [1765275902.9741] manager: NetworkManager state is now CONNECTED_LOCAL
Dec  9 05:25:02 np0005551604 NetworkManager[56302]: <info>  [1765275902.9745] device (eth1): Activation: successful, device activated.
Dec  9 05:25:02 np0005551604 NetworkManager[56302]: <info>  [1765275902.9782] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Dec  9 05:25:02 np0005551604 NetworkManager[56302]: <info>  [1765275902.9785] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Dec  9 05:25:02 np0005551604 NetworkManager[56302]: <info>  [1765275902.9790] manager: NetworkManager state is now CONNECTED_SITE
Dec  9 05:25:02 np0005551604 NetworkManager[56302]: <info>  [1765275902.9794] device (eth0): Activation: successful, device activated.
Dec  9 05:25:02 np0005551604 NetworkManager[56302]: <info>  [1765275902.9800] manager: NetworkManager state is now CONNECTED_GLOBAL
Dec  9 05:25:03 np0005551604 NetworkManager[56302]: <info>  [1765275903.0889] manager: startup complete
Dec  9 05:25:03 np0005551604 systemd[1]: Finished Network Manager Wait Online.
Dec  9 05:25:03 np0005551604 python3.9[56479]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  9 05:25:03 np0005551604 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec  9 05:25:03 np0005551604 systemd[1]: Finished man-db-cache-update.service.
Dec  9 05:25:03 np0005551604 systemd[1]: run-r479ddfe5a6484992afedacf8ad2bd388.service: Deactivated successfully.
Dec  9 05:25:13 np0005551604 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec  9 05:25:14 np0005551604 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec  9 05:25:14 np0005551604 systemd[1]: Starting man-db-cache-update.service...
Dec  9 05:25:14 np0005551604 systemd[1]: Reloading.
Dec  9 05:25:14 np0005551604 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  9 05:25:14 np0005551604 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  9 05:25:15 np0005551604 systemd[1]: Queuing reload/restart jobs for marked units…
Dec  9 05:25:15 np0005551604 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec  9 05:25:15 np0005551604 systemd[1]: Finished man-db-cache-update.service.
Dec  9 05:25:15 np0005551604 systemd[1]: run-rb7bd3c19306b4ca98676e6f6b27f9e43.service: Deactivated successfully.
Dec  9 05:25:16 np0005551604 python3.9[56970]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  9 05:25:17 np0005551604 python3.9[57122]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=no-auto-default path=/etc/NetworkManager/NetworkManager.conf section=main state=present value=* exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:25:18 np0005551604 python3.9[57276]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:25:19 np0005551604 python3.9[57428]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:25:19 np0005551604 python3.9[57580]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:25:20 np0005551604 python3.9[57732]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
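
[editor's note] The five ini_file calls above adjust NetworkManager's [main] section: no-auto-default=* stops NM from generating fallback DHCP profiles for unconfigured NICs, while removing the dns and rc-manager overrides (including cloud-init's 99-cloud-init.conf drop-in) hands resolv.conf management back to NM. A sketch of an equivalent looped task:

    - name: Hand resolv.conf management back to NetworkManager
      community.general.ini_file:
        path: "{{ item.path }}"
        section: main
        option: "{{ item.option }}"
        value: "{{ item.value }}"
        state: "{{ item.state }}"
        backup: true
        no_extra_spaces: true
        mode: "0644"
      loop:
        - { path: /etc/NetworkManager/NetworkManager.conf, option: no-auto-default, value: '*', state: present }
        - { path: /etc/NetworkManager/NetworkManager.conf, option: dns, value: none, state: absent }
        - { path: /etc/NetworkManager/conf.d/99-cloud-init.conf, option: dns, value: none, state: absent }
        - { path: /etc/NetworkManager/NetworkManager.conf, option: rc-manager, value: unmanaged, state: absent }
        - { path: /etc/NetworkManager/conf.d/99-cloud-init.conf, option: rc-manager, value: unmanaged, state: absent }
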
Dec  9 05:25:21 np0005551604 python3.9[57884]: ansible-ansible.legacy.stat Invoked with path=/etc/dhcp/dhclient-enter-hooks follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  9 05:25:22 np0005551604 python3.9[58007]: ansible-ansible.legacy.copy Invoked with dest=/etc/dhcp/dhclient-enter-hooks mode=0755 src=/home/zuul/.ansible/tmp/ansible-tmp-1765275920.5907054-229-168831412801827/.source _original_basename=.bkg_i5lg follow=False checksum=f6278a40de79a9841f6ed1fc584538225566990c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:25:22 np0005551604 python3.9[58159]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/os-net-config state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:25:23 np0005551604 python3.9[58311]: ansible-edpm_os_net_config_mappings Invoked with net_config_data_lookup={}
Dec  9 05:25:24 np0005551604 python3.9[58463]: ansible-ansible.builtin.file Invoked with path=/var/lib/edpm-config/scripts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:25:26 np0005551604 python3.9[58890]: ansible-ansible.builtin.slurp Invoked with path=/etc/os-net-config/config.yaml src=/etc/os-net-config/config.yaml
Dec  9 05:25:28 np0005551604 ansible-async_wrapper.py[59065]: Invoked with j839379842523 300 /home/zuul/.ansible/tmp/ansible-tmp-1765275926.9524686-295-227564661618172/AnsiballZ_edpm_os_net_config.py _
Dec  9 05:25:28 np0005551604 ansible-async_wrapper.py[59068]: Starting module and watcher
Dec  9 05:25:28 np0005551604 ansible-async_wrapper.py[59068]: Start watching 59069 (300)
Dec  9 05:25:28 np0005551604 ansible-async_wrapper.py[59069]: Start module (59069)
Dec  9 05:25:28 np0005551604 ansible-async_wrapper.py[59065]: Return async_wrapper task started.
Dec  9 05:25:28 np0005551604 python3.9[59070]: ansible-edpm_os_net_config Invoked with cleanup=True config_file=/etc/os-net-config/config.yaml debug=True detailed_exit_codes=True safe_defaults=False use_nmstate=True
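
[editor's note] The async_wrapper lines show edpm_os_net_config launched under Ansible's async runner with a 300-second budget (the "300" after the job id), so the play survives the network reconfiguration it is about to perform. A sketch of the invocation; the module's collection namespace is not recorded in the log and is omitted here, and the poll interval is illustrative:

    # Collection namespace omitted: the log records only the short module name.
    - name: Apply os-net-config with nmstate
      edpm_os_net_config:
        config_file: /etc/os-net-config/config.yaml
        cleanup: true
        debug: true
        detailed_exit_codes: true
        safe_defaults: false
        use_nmstate: true
      async: 300   # matches the 300-second budget in the async_wrapper entry
      poll: 5      # polling interval not recorded in the log; illustrative
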
Dec  9 05:25:29 np0005551604 kernel: cfg80211: Loading compiled-in X.509 certificates for regulatory database
Dec  9 05:25:29 np0005551604 kernel: Loaded X.509 cert 'sforshee: 00b28ddf47aef9cea7'
Dec  9 05:25:29 np0005551604 kernel: Loaded X.509 cert 'wens: 61c038651aabdcf94bd0ac7ff06c7248db18c600'
Dec  9 05:25:29 np0005551604 kernel: platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
Dec  9 05:25:29 np0005551604 kernel: cfg80211: failed to load regulatory.db
Dec  9 05:25:30 np0005551604 NetworkManager[56302]: <info>  [1765275930.2193] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=59071 uid=0 result="success"
Dec  9 05:25:30 np0005551604 NetworkManager[56302]: <info>  [1765275930.2220] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=59071 uid=0 result="success"
Dec  9 05:25:30 np0005551604 NetworkManager[56302]: <info>  [1765275930.2841] manager: (br-ex): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/4)
Dec  9 05:25:30 np0005551604 NetworkManager[56302]: <info>  [1765275930.2842] audit: op="connection-add" uuid="283dc479-3dc8-4c77-a52b-aa6ae2f291b4" name="br-ex-br" pid=59071 uid=0 result="success"
Dec  9 05:25:30 np0005551604 NetworkManager[56302]: <info>  [1765275930.2858] manager: (br-ex): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/5)
Dec  9 05:25:30 np0005551604 NetworkManager[56302]: <info>  [1765275930.2859] audit: op="connection-add" uuid="4e610b41-b7c5-45a1-bd61-f214c02ef3cf" name="br-ex-port" pid=59071 uid=0 result="success"
Dec  9 05:25:30 np0005551604 NetworkManager[56302]: <info>  [1765275930.2870] manager: (eth1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/6)
Dec  9 05:25:30 np0005551604 NetworkManager[56302]: <info>  [1765275930.2870] audit: op="connection-add" uuid="674fe76a-bc52-4f3f-874a-b130701b2895" name="eth1-port" pid=59071 uid=0 result="success"
Dec  9 05:25:30 np0005551604 NetworkManager[56302]: <info>  [1765275930.2881] manager: (vlan20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/7)
Dec  9 05:25:30 np0005551604 NetworkManager[56302]: <info>  [1765275930.2882] audit: op="connection-add" uuid="4ce4554a-760c-4435-988b-77746bb27c13" name="vlan20-port" pid=59071 uid=0 result="success"
Dec  9 05:25:30 np0005551604 NetworkManager[56302]: <info>  [1765275930.2891] manager: (vlan21): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/8)
Dec  9 05:25:30 np0005551604 NetworkManager[56302]: <info>  [1765275930.2892] audit: op="connection-add" uuid="8aefaa47-7de4-43e6-b444-ee39152781f4" name="vlan21-port" pid=59071 uid=0 result="success"
Dec  9 05:25:30 np0005551604 NetworkManager[56302]: <info>  [1765275930.2902] manager: (vlan22): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/9)
Dec  9 05:25:30 np0005551604 NetworkManager[56302]: <info>  [1765275930.2903] audit: op="connection-add" uuid="2e90ec5a-db3a-4276-b632-c7580998392e" name="vlan22-port" pid=59071 uid=0 result="success"
Dec  9 05:25:30 np0005551604 NetworkManager[56302]: <info>  [1765275930.2923] audit: op="connection-update" uuid="5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03" name="System eth0" args="ipv6.method,ipv6.dhcp-timeout,ipv6.addr-gen-mode,802-3-ethernet.mtu,ipv4.dhcp-timeout,ipv4.dhcp-client-id,connection.autoconnect-priority,connection.timestamp" pid=59071 uid=0 result="success"
Dec  9 05:25:30 np0005551604 NetworkManager[56302]: <info>  [1765275930.2942] manager: (br-ex): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/10)
Dec  9 05:25:30 np0005551604 NetworkManager[56302]: <info>  [1765275930.2944] audit: op="connection-add" uuid="30be0f96-effb-4c5b-9c38-7df15607b059" name="br-ex-if" pid=59071 uid=0 result="success"
Dec  9 05:25:30 np0005551604 NetworkManager[56302]: <info>  [1765275930.3011] audit: op="connection-update" uuid="6b6a22e5-bf6a-510d-869a-e83c7a7cb57f" name="ci-private-network" args="ovs-external-ids.data,ipv6.method,ipv6.routes,ipv6.dns,ipv6.addr-gen-mode,ipv6.routing-rules,ipv6.addresses,ovs-interface.type,ipv4.method,ipv4.routes,ipv4.dns,ipv4.routing-rules,ipv4.addresses,ipv4.never-default,connection.port-type,connection.timestamp,connection.controller,connection.slave-type,connection.master" pid=59071 uid=0 result="success"
Dec  9 05:25:30 np0005551604 NetworkManager[56302]: <info>  [1765275930.3029] manager: (vlan20): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/11)
Dec  9 05:25:30 np0005551604 NetworkManager[56302]: <info>  [1765275930.3030] audit: op="connection-add" uuid="5c4dfb5e-4fe6-4043-878d-08e89025bbc8" name="vlan20-if" pid=59071 uid=0 result="success"
Dec  9 05:25:30 np0005551604 NetworkManager[56302]: <info>  [1765275930.3047] manager: (vlan21): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/12)
Dec  9 05:25:30 np0005551604 NetworkManager[56302]: <info>  [1765275930.3048] audit: op="connection-add" uuid="733f9516-1b97-44d1-8a52-f240114e899e" name="vlan21-if" pid=59071 uid=0 result="success"
Dec  9 05:25:30 np0005551604 NetworkManager[56302]: <info>  [1765275930.3066] manager: (vlan22): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/13)
Dec  9 05:25:30 np0005551604 NetworkManager[56302]: <info>  [1765275930.3068] audit: op="connection-add" uuid="57931a83-d7e9-40c7-b5d5-0350f3cd0d8a" name="vlan22-if" pid=59071 uid=0 result="success"
Dec  9 05:25:30 np0005551604 NetworkManager[56302]: <info>  [1765275930.3083] audit: op="connection-delete" uuid="c9d71888-3b72-38d5-8bab-6a45e2651a1e" name="Wired connection 1" pid=59071 uid=0 result="success"
Dec  9 05:25:30 np0005551604 NetworkManager[56302]: <info>  [1765275930.3097] device (br-ex)[Open vSwitch Bridge]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  9 05:25:30 np0005551604 NetworkManager[56302]: <warn>  [1765275930.3100] device (br-ex)[Open vSwitch Bridge]: error setting IPv4 forwarding to '1': Success
Dec  9 05:25:30 np0005551604 NetworkManager[56302]: <info>  [1765275930.3105] device (br-ex)[Open vSwitch Bridge]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec  9 05:25:30 np0005551604 NetworkManager[56302]: <info>  [1765275930.3109] device (br-ex)[Open vSwitch Bridge]: Activation: starting connection 'br-ex-br' (283dc479-3dc8-4c77-a52b-aa6ae2f291b4)
Dec  9 05:25:30 np0005551604 NetworkManager[56302]: <info>  [1765275930.3109] audit: op="connection-activate" uuid="283dc479-3dc8-4c77-a52b-aa6ae2f291b4" name="br-ex-br" pid=59071 uid=0 result="success"
Dec  9 05:25:30 np0005551604 NetworkManager[56302]: <info>  [1765275930.3110] device (br-ex)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  9 05:25:30 np0005551604 NetworkManager[56302]: <warn>  [1765275930.3111] device (br-ex)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Dec  9 05:25:30 np0005551604 NetworkManager[56302]: <info>  [1765275930.3115] device (br-ex)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec  9 05:25:30 np0005551604 NetworkManager[56302]: <info>  [1765275930.3119] device (br-ex)[Open vSwitch Port]: Activation: starting connection 'br-ex-port' (4e610b41-b7c5-45a1-bd61-f214c02ef3cf)
Dec  9 05:25:30 np0005551604 NetworkManager[56302]: <info>  [1765275930.3120] device (eth1)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  9 05:25:30 np0005551604 NetworkManager[56302]: <warn>  [1765275930.3120] device (eth1)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Dec  9 05:25:30 np0005551604 NetworkManager[56302]: <info>  [1765275930.3124] device (eth1)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec  9 05:25:30 np0005551604 NetworkManager[56302]: <info>  [1765275930.3128] device (eth1)[Open vSwitch Port]: Activation: starting connection 'eth1-port' (674fe76a-bc52-4f3f-874a-b130701b2895)
Dec  9 05:25:30 np0005551604 NetworkManager[56302]: <info>  [1765275930.3129] device (vlan20)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  9 05:25:30 np0005551604 NetworkManager[56302]: <warn>  [1765275930.3130] device (vlan20)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Dec  9 05:25:30 np0005551604 NetworkManager[56302]: <info>  [1765275930.3134] device (vlan20)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec  9 05:25:30 np0005551604 NetworkManager[56302]: <info>  [1765275930.3138] device (vlan20)[Open vSwitch Port]: Activation: starting connection 'vlan20-port' (4ce4554a-760c-4435-988b-77746bb27c13)
Dec  9 05:25:30 np0005551604 NetworkManager[56302]: <info>  [1765275930.3139] device (vlan21)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  9 05:25:30 np0005551604 NetworkManager[56302]: <warn>  [1765275930.3142] device (vlan21)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Dec  9 05:25:30 np0005551604 NetworkManager[56302]: <info>  [1765275930.3146] device (vlan21)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec  9 05:25:30 np0005551604 NetworkManager[56302]: <info>  [1765275930.3150] device (vlan21)[Open vSwitch Port]: Activation: starting connection 'vlan21-port' (8aefaa47-7de4-43e6-b444-ee39152781f4)
Dec  9 05:25:30 np0005551604 NetworkManager[56302]: <info>  [1765275930.3151] device (vlan22)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  9 05:25:30 np0005551604 NetworkManager[56302]: <warn>  [1765275930.3152] device (vlan22)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Dec  9 05:25:30 np0005551604 NetworkManager[56302]: <info>  [1765275930.3157] device (vlan22)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec  9 05:25:30 np0005551604 NetworkManager[56302]: <info>  [1765275930.3161] device (vlan22)[Open vSwitch Port]: Activation: starting connection 'vlan22-port' (2e90ec5a-db3a-4276-b632-c7580998392e)
Dec  9 05:25:30 np0005551604 NetworkManager[56302]: <info>  [1765275930.3161] device (br-ex)[Open vSwitch Bridge]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  9 05:25:30 np0005551604 NetworkManager[56302]: <info>  [1765275930.3163] device (br-ex)[Open vSwitch Bridge]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  9 05:25:30 np0005551604 NetworkManager[56302]: <info>  [1765275930.3165] device (br-ex)[Open vSwitch Bridge]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  9 05:25:30 np0005551604 NetworkManager[56302]: <info>  [1765275930.3170] device (br-ex)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  9 05:25:30 np0005551604 NetworkManager[56302]: <warn>  [1765275930.3171] device (br-ex)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Dec  9 05:25:30 np0005551604 NetworkManager[56302]: <info>  [1765275930.3174] device (br-ex)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec  9 05:25:30 np0005551604 NetworkManager[56302]: <info>  [1765275930.3178] device (br-ex)[Open vSwitch Interface]: Activation: starting connection 'br-ex-if' (30be0f96-effb-4c5b-9c38-7df15607b059)
Dec  9 05:25:30 np0005551604 NetworkManager[56302]: <info>  [1765275930.3178] device (br-ex)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  9 05:25:30 np0005551604 NetworkManager[56302]: <info>  [1765275930.3181] device (br-ex)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  9 05:25:30 np0005551604 NetworkManager[56302]: <info>  [1765275930.3183] device (br-ex)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  9 05:25:30 np0005551604 NetworkManager[56302]: <info>  [1765275930.3184] device (br-ex)[Open vSwitch Port]: Activation: connection 'br-ex-port' attached as port, continuing activation
Dec  9 05:25:30 np0005551604 NetworkManager[56302]: <info>  [1765275930.3186] device (eth1): state change: activated -> deactivating (reason 'new-activation', managed-type: 'full')
Dec  9 05:25:30 np0005551604 NetworkManager[56302]: <info>  [1765275930.3195] device (eth1): disconnecting for new activation request.
Dec  9 05:25:30 np0005551604 NetworkManager[56302]: <info>  [1765275930.3195] device (eth1)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  9 05:25:30 np0005551604 NetworkManager[56302]: <info>  [1765275930.3209] device (eth1)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  9 05:25:30 np0005551604 NetworkManager[56302]: <info>  [1765275930.3211] device (eth1)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  9 05:25:30 np0005551604 NetworkManager[56302]: <info>  [1765275930.3213] device (eth1)[Open vSwitch Port]: Activation: connection 'eth1-port' attached as port, continuing activation
Dec  9 05:25:30 np0005551604 NetworkManager[56302]: <info>  [1765275930.3217] device (vlan20)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  9 05:25:30 np0005551604 NetworkManager[56302]: <warn>  [1765275930.3218] device (vlan20)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Dec  9 05:25:30 np0005551604 NetworkManager[56302]: <info>  [1765275930.3222] device (vlan20)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec  9 05:25:30 np0005551604 NetworkManager[56302]: <info>  [1765275930.3226] device (vlan20)[Open vSwitch Interface]: Activation: starting connection 'vlan20-if' (5c4dfb5e-4fe6-4043-878d-08e89025bbc8)
Dec  9 05:25:30 np0005551604 NetworkManager[56302]: <info>  [1765275930.3228] device (vlan20)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  9 05:25:30 np0005551604 NetworkManager[56302]: <info>  [1765275930.3231] device (vlan20)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  9 05:25:30 np0005551604 NetworkManager[56302]: <info>  [1765275930.3234] device (vlan20)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  9 05:25:30 np0005551604 NetworkManager[56302]: <info>  [1765275930.3236] device (vlan20)[Open vSwitch Port]: Activation: connection 'vlan20-port' attached as port, continuing activation
Dec  9 05:25:30 np0005551604 NetworkManager[56302]: <info>  [1765275930.3240] device (vlan21)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  9 05:25:30 np0005551604 NetworkManager[56302]: <warn>  [1765275930.3241] device (vlan21)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Dec  9 05:25:30 np0005551604 NetworkManager[56302]: <info>  [1765275930.3244] device (vlan21)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec  9 05:25:30 np0005551604 NetworkManager[56302]: <info>  [1765275930.3249] device (vlan21)[Open vSwitch Interface]: Activation: starting connection 'vlan21-if' (733f9516-1b97-44d1-8a52-f240114e899e)
Dec  9 05:25:30 np0005551604 NetworkManager[56302]: <info>  [1765275930.3250] device (vlan21)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  9 05:25:30 np0005551604 NetworkManager[56302]: <info>  [1765275930.3252] device (vlan21)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  9 05:25:30 np0005551604 NetworkManager[56302]: <info>  [1765275930.3255] device (vlan21)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  9 05:25:30 np0005551604 NetworkManager[56302]: <info>  [1765275930.3257] device (vlan21)[Open vSwitch Port]: Activation: connection 'vlan21-port' attached as port, continuing activation
Dec  9 05:25:30 np0005551604 NetworkManager[56302]: <info>  [1765275930.3260] device (vlan22)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  9 05:25:30 np0005551604 NetworkManager[56302]: <warn>  [1765275930.3262] device (vlan22)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Dec  9 05:25:30 np0005551604 NetworkManager[56302]: <info>  [1765275930.3265] device (vlan22)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Dec  9 05:25:30 np0005551604 NetworkManager[56302]: <info>  [1765275930.3269] device (vlan22)[Open vSwitch Interface]: Activation: starting connection 'vlan22-if' (57931a83-d7e9-40c7-b5d5-0350f3cd0d8a)
Dec  9 05:25:30 np0005551604 NetworkManager[56302]: <info>  [1765275930.3271] device (vlan22)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  9 05:25:30 np0005551604 NetworkManager[56302]: <info>  [1765275930.3273] device (vlan22)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  9 05:25:30 np0005551604 NetworkManager[56302]: <info>  [1765275930.3276] device (vlan22)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  9 05:25:30 np0005551604 NetworkManager[56302]: <info>  [1765275930.3277] device (vlan22)[Open vSwitch Port]: Activation: connection 'vlan22-port' attached as port, continuing activation
Dec  9 05:25:30 np0005551604 NetworkManager[56302]: <info>  [1765275930.3280] device (br-ex)[Open vSwitch Bridge]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  9 05:25:30 np0005551604 NetworkManager[56302]: <info>  [1765275930.3291] audit: op="device-reapply" interface="eth0" ifindex=2 args="ipv6.method,ipv6.addr-gen-mode,802-3-ethernet.mtu,ipv4.dhcp-timeout,ipv4.dhcp-client-id,connection.autoconnect-priority" pid=59071 uid=0 result="success"
Dec  9 05:25:30 np0005551604 NetworkManager[56302]: <info>  [1765275930.3293] device (br-ex)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  9 05:25:30 np0005551604 NetworkManager[56302]: <info>  [1765275930.3296] device (br-ex)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  9 05:25:30 np0005551604 NetworkManager[56302]: <info>  [1765275930.3298] device (br-ex)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  9 05:25:30 np0005551604 NetworkManager[56302]: <info>  [1765275930.3304] device (br-ex)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  9 05:25:30 np0005551604 NetworkManager[56302]: <info>  [1765275930.3308] device (eth1)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  9 05:25:30 np0005551604 NetworkManager[56302]: <info>  [1765275930.3312] device (vlan20)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  9 05:25:30 np0005551604 NetworkManager[56302]: <info>  [1765275930.3315] device (vlan20)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  9 05:25:30 np0005551604 NetworkManager[56302]: <info>  [1765275930.3318] device (vlan20)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  9 05:25:30 np0005551604 NetworkManager[56302]: <info>  [1765275930.3322] device (vlan20)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  9 05:25:30 np0005551604 NetworkManager[56302]: <info>  [1765275930.3326] device (vlan21)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  9 05:25:30 np0005551604 NetworkManager[56302]: <info>  [1765275930.3330] device (vlan21)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  9 05:25:30 np0005551604 NetworkManager[56302]: <info>  [1765275930.3332] device (vlan21)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  9 05:25:30 np0005551604 NetworkManager[56302]: <info>  [1765275930.3336] device (vlan21)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  9 05:25:30 np0005551604 NetworkManager[56302]: <info>  [1765275930.3341] device (vlan22)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  9 05:25:30 np0005551604 NetworkManager[56302]: <info>  [1765275930.3344] device (vlan22)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  9 05:25:30 np0005551604 NetworkManager[56302]: <info>  [1765275930.3346] device (vlan22)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  9 05:25:30 np0005551604 NetworkManager[56302]: <info>  [1765275930.3351] device (vlan22)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  9 05:25:30 np0005551604 kernel: ovs-system: entered promiscuous mode
Dec  9 05:25:30 np0005551604 systemd-udevd[59077]: Network interface NamePolicy= disabled on kernel command line.
Dec  9 05:25:30 np0005551604 NetworkManager[56302]: <info>  [1765275930.3356] dhcp4 (eth0): canceled DHCP transaction
Dec  9 05:25:30 np0005551604 NetworkManager[56302]: <info>  [1765275930.3356] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Dec  9 05:25:30 np0005551604 NetworkManager[56302]: <info>  [1765275930.3356] dhcp4 (eth0): state changed no lease
Dec  9 05:25:30 np0005551604 NetworkManager[56302]: <info>  [1765275930.3358] dhcp4 (eth0): activation: beginning transaction (no timeout)
Dec  9 05:25:30 np0005551604 NetworkManager[56302]: <info>  [1765275930.3366] device (br-ex)[Open vSwitch Interface]: Activation: connection 'br-ex-if' attached as port, continuing activation
Dec  9 05:25:30 np0005551604 NetworkManager[56302]: <info>  [1765275930.3370] audit: op="device-reapply" interface="eth1" ifindex=3 pid=59071 uid=0 result="fail" reason="Device is not activated"
Dec  9 05:25:30 np0005551604 systemd[1]: Starting Network Manager Script Dispatcher Service...
Dec  9 05:25:30 np0005551604 kernel: Timeout policy base is empty
Dec  9 05:25:30 np0005551604 systemd[1]: Started Network Manager Script Dispatcher Service.
Dec  9 05:25:30 np0005551604 NetworkManager[56302]: <info>  [1765275930.3851] device (vlan20)[Open vSwitch Interface]: Activation: connection 'vlan20-if' attached as port, continuing activation
Dec  9 05:25:30 np0005551604 NetworkManager[56302]: <info>  [1765275930.3857] dhcp4 (eth0): state changed new lease, address=38.102.83.201
Dec  9 05:25:30 np0005551604 NetworkManager[56302]: <info>  [1765275930.3864] device (vlan21)[Open vSwitch Interface]: Activation: connection 'vlan21-if' attached as port, continuing activation
Dec  9 05:25:30 np0005551604 NetworkManager[56302]: <info>  [1765275930.3875] device (vlan22)[Open vSwitch Interface]: Activation: connection 'vlan22-if' attached as port, continuing activation
Dec  9 05:25:30 np0005551604 NetworkManager[56302]: <info>  [1765275930.3918] device (eth1): state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Dec  9 05:25:30 np0005551604 kernel: br-ex: entered promiscuous mode
Dec  9 05:25:30 np0005551604 NetworkManager[56302]: <info>  [1765275930.4124] device (eth1): Activation: starting connection 'ci-private-network' (6b6a22e5-bf6a-510d-869a-e83c7a7cb57f)
Dec  9 05:25:30 np0005551604 NetworkManager[56302]: <info>  [1765275930.4130] device (br-ex)[Open vSwitch Bridge]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  9 05:25:30 np0005551604 NetworkManager[56302]: <info>  [1765275930.4132] device (eth1): state change: disconnected -> deactivating (reason 'new-activation', managed-type: 'full')
Dec  9 05:25:30 np0005551604 NetworkManager[56302]: <info>  [1765275930.4138] device (eth1): disconnecting for new activation request.
Dec  9 05:25:30 np0005551604 NetworkManager[56302]: <info>  [1765275930.4140] audit: op="connection-activate" uuid="6b6a22e5-bf6a-510d-869a-e83c7a7cb57f" name="ci-private-network" pid=59071 uid=0 result="success"
Dec  9 05:25:30 np0005551604 NetworkManager[56302]: <info>  [1765275930.4141] device (br-ex)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  9 05:25:30 np0005551604 NetworkManager[56302]: <info>  [1765275930.4148] device (br-ex)[Open vSwitch Bridge]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  9 05:25:30 np0005551604 NetworkManager[56302]: <info>  [1765275930.4152] device (br-ex)[Open vSwitch Bridge]: Activation: successful, device activated.
Dec  9 05:25:30 np0005551604 NetworkManager[56302]: <info>  [1765275930.4156] device (br-ex)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  9 05:25:30 np0005551604 NetworkManager[56302]: <info>  [1765275930.4160] device (br-ex)[Open vSwitch Port]: Activation: successful, device activated.
Dec  9 05:25:30 np0005551604 NetworkManager[56302]: <info>  [1765275930.4168] device (eth1)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  9 05:25:30 np0005551604 NetworkManager[56302]: <info>  [1765275930.4170] device (vlan20)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  9 05:25:30 np0005551604 NetworkManager[56302]: <info>  [1765275930.4172] device (vlan21)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  9 05:25:30 np0005551604 NetworkManager[56302]: <info>  [1765275930.4174] device (vlan22)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  9 05:25:30 np0005551604 NetworkManager[56302]: <info>  [1765275930.4179] device (eth1)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  9 05:25:30 np0005551604 NetworkManager[56302]: <info>  [1765275930.4182] device (eth1)[Open vSwitch Port]: Activation: successful, device activated.
Dec  9 05:25:30 np0005551604 NetworkManager[56302]: <info>  [1765275930.4188] device (vlan20)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  9 05:25:30 np0005551604 NetworkManager[56302]: <info>  [1765275930.4192] device (vlan20)[Open vSwitch Port]: Activation: successful, device activated.
Dec  9 05:25:30 np0005551604 NetworkManager[56302]: <info>  [1765275930.4196] device (vlan21)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  9 05:25:30 np0005551604 kernel: vlan22: entered promiscuous mode
Dec  9 05:25:30 np0005551604 NetworkManager[56302]: <info>  [1765275930.4201] device (vlan21)[Open vSwitch Port]: Activation: successful, device activated.
Dec  9 05:25:30 np0005551604 NetworkManager[56302]: <info>  [1765275930.4205] device (vlan22)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  9 05:25:30 np0005551604 systemd-udevd[59075]: Network interface NamePolicy= disabled on kernel command line.
Dec  9 05:25:30 np0005551604 NetworkManager[56302]: <info>  [1765275930.4209] device (vlan22)[Open vSwitch Port]: Activation: successful, device activated.
Dec  9 05:25:30 np0005551604 NetworkManager[56302]: <info>  [1765275930.4233] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=59071 uid=0 result="success"
Dec  9 05:25:30 np0005551604 NetworkManager[56302]: <info>  [1765275930.4243] device (br-ex)[Open vSwitch Interface]: carrier: link connected
Dec  9 05:25:30 np0005551604 NetworkManager[56302]: <info>  [1765275930.4244] device (eth1): state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Dec  9 05:25:30 np0005551604 NetworkManager[56302]: <info>  [1765275930.4251] device (eth1): Activation: starting connection 'ci-private-network' (6b6a22e5-bf6a-510d-869a-e83c7a7cb57f)
Dec  9 05:25:30 np0005551604 NetworkManager[56302]: <info>  [1765275930.4259] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Dec  9 05:25:30 np0005551604 NetworkManager[56302]: <info>  [1765275930.4263] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Dec  9 05:25:30 np0005551604 NetworkManager[56302]: <info>  [1765275930.4275] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Dec  9 05:25:30 np0005551604 kernel: vlan20: entered promiscuous mode
Dec  9 05:25:30 np0005551604 NetworkManager[56302]: <info>  [1765275930.4309] device (eth1): Activation: connection 'ci-private-network' attached as port, continuing activation
Dec  9 05:25:30 np0005551604 NetworkManager[56302]: <info>  [1765275930.4312] device (br-ex)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  9 05:25:30 np0005551604 NetworkManager[56302]: <info>  [1765275930.4320] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  9 05:25:30 np0005551604 NetworkManager[56302]: <info>  [1765275930.4355] device (vlan22)[Open vSwitch Interface]: carrier: link connected
Dec  9 05:25:30 np0005551604 kernel: vlan21: entered promiscuous mode
Dec  9 05:25:30 np0005551604 systemd-udevd[59076]: Network interface NamePolicy= disabled on kernel command line.
Dec  9 05:25:30 np0005551604 NetworkManager[56302]: <info>  [1765275930.4359] device (br-ex)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  9 05:25:30 np0005551604 NetworkManager[56302]: <info>  [1765275930.4360] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  9 05:25:30 np0005551604 NetworkManager[56302]: <info>  [1765275930.4363] device (br-ex)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  9 05:25:30 np0005551604 NetworkManager[56302]: <info>  [1765275930.4366] device (br-ex)[Open vSwitch Interface]: Activation: successful, device activated.
Dec  9 05:25:30 np0005551604 NetworkManager[56302]: <info>  [1765275930.4370] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  9 05:25:30 np0005551604 NetworkManager[56302]: <info>  [1765275930.4373] device (eth1): Activation: successful, device activated.
Dec  9 05:25:30 np0005551604 kernel: virtio_net virtio5 eth1: entered promiscuous mode
Dec  9 05:25:30 np0005551604 NetworkManager[56302]: <info>  [1765275930.4396] device (vlan22)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  9 05:25:30 np0005551604 NetworkManager[56302]: <info>  [1765275930.4405] device (vlan20)[Open vSwitch Interface]: carrier: link connected
Dec  9 05:25:30 np0005551604 NetworkManager[56302]: <info>  [1765275930.4421] device (vlan20)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  9 05:25:30 np0005551604 NetworkManager[56302]: <info>  [1765275930.4428] device (vlan22)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  9 05:25:30 np0005551604 NetworkManager[56302]: <info>  [1765275930.4431] device (vlan22)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  9 05:25:30 np0005551604 NetworkManager[56302]: <info>  [1765275930.4434] device (vlan22)[Open vSwitch Interface]: Activation: successful, device activated.
Dec  9 05:25:30 np0005551604 NetworkManager[56302]: <info>  [1765275930.4472] device (vlan21)[Open vSwitch Interface]: carrier: link connected
Dec  9 05:25:30 np0005551604 NetworkManager[56302]: <info>  [1765275930.4476] device (vlan20)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  9 05:25:30 np0005551604 NetworkManager[56302]: <info>  [1765275930.4480] device (vlan20)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  9 05:25:30 np0005551604 NetworkManager[56302]: <info>  [1765275930.4484] device (vlan20)[Open vSwitch Interface]: Activation: successful, device activated.
Dec  9 05:25:30 np0005551604 NetworkManager[56302]: <info>  [1765275930.4495] device (vlan21)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Dec  9 05:25:30 np0005551604 NetworkManager[56302]: <info>  [1765275930.4529] device (vlan21)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Dec  9 05:25:30 np0005551604 NetworkManager[56302]: <info>  [1765275930.4530] device (vlan21)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Dec  9 05:25:30 np0005551604 NetworkManager[56302]: <info>  [1765275930.4534] device (vlan21)[Open vSwitch Interface]: Activation: successful, device activated.
Dec  9 05:25:31 np0005551604 NetworkManager[56302]: <info>  [1765275931.5812] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=59071 uid=0 result="success"
Dec  9 05:25:31 np0005551604 NetworkManager[56302]: <info>  [1765275931.7383] checkpoint[0x55c704c44950]: destroy /org/freedesktop/NetworkManager/Checkpoint/1
Dec  9 05:25:31 np0005551604 NetworkManager[56302]: <info>  [1765275931.7385] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=59071 uid=0 result="success"
Dec  9 05:25:31 np0005551604 python3.9[59409]: ansible-ansible.legacy.async_status Invoked with jid=j839379842523.59065 mode=status _async_dir=/root/.ansible_async
Dec  9 05:25:31 np0005551604 NetworkManager[56302]: <info>  [1765275931.9931] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=59071 uid=0 result="success"
Dec  9 05:25:31 np0005551604 NetworkManager[56302]: <info>  [1765275931.9941] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=59071 uid=0 result="success"
Dec  9 05:25:32 np0005551604 NetworkManager[56302]: <info>  [1765275932.1966] audit: op="networking-control" arg="global-dns-configuration" pid=59071 uid=0 result="success"
Dec  9 05:25:32 np0005551604 NetworkManager[56302]: <info>  [1765275932.2219] config: signal: SET_VALUES,values,values-intern,global-dns-config (/etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf)
Dec  9 05:25:32 np0005551604 NetworkManager[56302]: <info>  [1765275932.2253] audit: op="networking-control" arg="global-dns-configuration" pid=59071 uid=0 result="success"
Dec  9 05:25:32 np0005551604 NetworkManager[56302]: <info>  [1765275932.2280] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=59071 uid=0 result="success"
Dec  9 05:25:32 np0005551604 NetworkManager[56302]: <info>  [1765275932.3647] checkpoint[0x55c704c44a20]: destroy /org/freedesktop/NetworkManager/Checkpoint/2
Dec  9 05:25:32 np0005551604 NetworkManager[56302]: <info>  [1765275932.3650] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=59071 uid=0 result="success"
Dec  9 05:25:32 np0005551604 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Dec  9 05:25:32 np0005551604 ansible-async_wrapper.py[59069]: Module complete (59069)
Dec  9 05:25:33 np0005551604 ansible-async_wrapper.py[59068]: Done in kid B.
Dec  9 05:25:35 np0005551604 python3.9[59519]: ansible-ansible.legacy.async_status Invoked with jid=j839379842523.59065 mode=status _async_dir=/root/.ansible_async
Dec  9 05:25:35 np0005551604 python3.9[59619]: ansible-ansible.legacy.async_status Invoked with jid=j839379842523.59065 mode=cleanup _async_dir=/root/.ansible_async
Dec  9 05:25:36 np0005551604 python3.9[59771]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  9 05:25:37 np0005551604 python3.9[59894]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/os-net-config.returncode mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765275936.1900833-322-230654113739781/.source.returncode _original_basename=.vy4ljc1c follow=False checksum=b6589fc6ab0dc82cf12099d1c2d40ab994e8410c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
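The checksum logged for os-net-config.returncode, b6589fc6ab0dc82cf12099d1c2d40ab994e8410c, is the SHA-1 of the single character "0": the file records a zero exit status from os-net-config. The correspondence is easy to confirm:

  # SHA-1 of the literal string "0" (no trailing newline):
  printf '0' | sha1sum
  # -> b6589fc6ab0dc82cf12099d1c2d40ab994e8410c  -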
Dec  9 05:25:38 np0005551604 python3.9[60046]: ansible-ansible.legacy.stat Invoked with path=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  9 05:25:38 np0005551604 python3.9[60169]: ansible-ansible.legacy.copy Invoked with dest=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765275937.5417066-338-27347400406451/.source.cfg _original_basename=.p9m85jgh follow=False checksum=f3c5952a9cd4c6c31b314b25eb897168971cc86e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
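99-edpm-disable-network-config.cfg is the usual cloud-init drop-in that stops cloud-init from regenerating network configuration on later boots, now that os-net-config owns the interfaces. The log records only the copy, not the body; the conventional content, per cloud-init's network-config documentation, would be written like this:

  cat > /etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg <<'EOF'
  network:
    config: disabled
  EOF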
Dec  9 05:25:39 np0005551604 python3.9[60322]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  9 05:25:39 np0005551604 systemd[1]: Reloading Network Manager...
Dec  9 05:25:39 np0005551604 NetworkManager[56302]: <info>  [1765275939.3571] audit: op="reload" arg="0" pid=60326 uid=0 result="success"
Dec  9 05:25:39 np0005551604 NetworkManager[56302]: <info>  [1765275939.3581] config: signal: SIGHUP,config-files,values,values-user,no-auto-default (/etc/NetworkManager/NetworkManager.conf, /usr/lib/NetworkManager/conf.d/00-server.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf, /var/lib/NetworkManager/NetworkManager-intern.conf)
Dec  9 05:25:39 np0005551604 systemd[1]: Reloaded Network Manager.
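state=reloaded on the systemd task maps to a plain reload of the NetworkManager unit: the daemon re-reads NetworkManager.conf and its conf.d drop-ins (the SIGHUP,config-files signal above) without restarting or bouncing any connection. The same reload by hand:

  # Via systemd...
  systemctl reload NetworkManager
  # ...or via NetworkManager's own client:
  nmcli general reload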
Dec  9 05:25:39 np0005551604 systemd-logind[806]: Session 12 logged out. Waiting for processes to exit.
Dec  9 05:25:39 np0005551604 systemd[1]: session-12.scope: Deactivated successfully.
Dec  9 05:25:39 np0005551604 systemd[1]: session-12.scope: Consumed 50.907s CPU time.
Dec  9 05:25:39 np0005551604 systemd-logind[806]: Removed session 12.
Dec  9 05:25:45 np0005551604 systemd-logind[806]: New session 13 of user zuul.
Dec  9 05:25:45 np0005551604 systemd[1]: Started Session 13 of User zuul.
Dec  9 05:25:46 np0005551604 python3.9[60510]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  9 05:25:47 np0005551604 python3.9[60664]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  9 05:25:48 np0005551604 python3.9[60853]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  9 05:25:48 np0005551604 systemd[1]: session-13.scope: Deactivated successfully.
Dec  9 05:25:48 np0005551604 systemd[1]: session-13.scope: Consumed 2.364s CPU time.
Dec  9 05:25:48 np0005551604 systemd-logind[806]: Session 13 logged out. Waiting for processes to exit.
Dec  9 05:25:48 np0005551604 systemd-logind[806]: Removed session 13.
Dec  9 05:25:49 np0005551604 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Dec  9 05:25:54 np0005551604 systemd-logind[806]: New session 14 of user zuul.
Dec  9 05:25:54 np0005551604 systemd[1]: Started Session 14 of User zuul.
Dec  9 05:25:55 np0005551604 python3.9[61036]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  9 05:25:56 np0005551604 python3.9[61190]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  9 05:25:58 np0005551604 python3.9[61346]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  9 05:25:58 np0005551604 python3.9[61431]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  9 05:26:01 np0005551604 python3.9[61584]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  9 05:26:02 np0005551604 python3.9[61776]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:26:02 np0005551604 python3.9[61928]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  9 05:26:02 np0005551604 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  9 05:26:03 np0005551604 python3.9[62090]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  9 05:26:04 np0005551604 python3.9[62168]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/containers/networks/podman.json _original_basename=podman_network_config.j2 recurse=False state=file path=/etc/containers/networks/podman.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
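Pre-creating /etc/containers/networks/podman.json pins the default podman network by configuration instead of letting it be generated on first use; netavark keeps each named network as a JSON file in that directory. The raw command earlier in the sequence checks the live definition the same way one would by hand:

  # Show the default network now served from /etc/containers/networks/:
  podman network inspect podman
  ls -l /etc/containers/networks/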
Dec  9 05:26:04 np0005551604 python3.9[62320]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  9 05:26:05 np0005551604 python3.9[62398]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf _original_basename=registries.conf.j2 recurse=False state=file path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  9 05:26:06 np0005551604 python3.9[62550]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec  9 05:26:06 np0005551604 python3.9[62702]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec  9 05:26:07 np0005551604 python3.9[62854]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Dec  9 05:26:08 np0005551604 python3.9[63006]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
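Taken together, the four ini_file tasks leave /etc/containers/containers.conf with the sections and values below. The playbook edits keys in place; this one-shot write is just a compact restatement of the end state (a real file may carry additional keys):

  cat > /etc/containers/containers.conf <<'EOF'
  [containers]
  pids_limit = 4096

  [engine]
  events_logger = "journald"
  runtime = "crun"

  [network]
  network_backend = "netavark"
  EOF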
Dec  9 05:26:08 np0005551604 python3.9[63158]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  9 05:26:11 np0005551604 python3.9[63313]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  9 05:26:12 np0005551604 python3.9[63467]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  9 05:26:13 np0005551604 python3.9[63619]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  9 05:26:13 np0005551604 python3.9[63771]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  9 05:26:14 np0005551604 python3.9[63924]: ansible-service_facts Invoked
Dec  9 05:26:14 np0005551604 network[63941]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec  9 05:26:14 np0005551604 network[63942]: 'network-scripts' will be removed from distribution in near future.
Dec  9 05:26:14 np0005551604 network[63943]: It is advised to switch to 'NetworkManager' instead for network management.
Dec  9 05:26:20 np0005551604 python3.9[64395]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  9 05:26:22 np0005551604 python3.9[64548]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Dec  9 05:26:23 np0005551604 python3.9[64700]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  9 05:26:24 np0005551604 python3.9[64825]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/chrony.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765275983.408193-232-221091931822936/.source.conf follow=False _original_basename=chrony.conf.j2 checksum=cfb003e56d02d0d2c65555452eb1a05073fecdad force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:26:25 np0005551604 python3.9[64979]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  9 05:26:26 np0005551604 python3.9[65104]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/sysconfig/chronyd mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765275984.9192743-247-70682147291956/.source follow=False _original_basename=chronyd.sysconfig.j2 checksum=dd196b1ff1f915b23eebc37ec77405b5dd3df76c force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:26:27 np0005551604 python3.9[65258]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
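PEERNTP=no in /etc/sysconfig/network keeps the dhclient hooks from injecting DHCP-supplied NTP servers into chronyd, so only the servers templated into /etc/chrony.conf are used. The lineinfile task is idempotent: it rewrites any existing ^PEERNTP= line or appends one. Verifying:

  grep '^PEERNTP=' /etc/sysconfig/network
  # -> PEERNTP=no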
Dec  9 05:26:28 np0005551604 python3.9[65412]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  9 05:26:30 np0005551604 python3.9[65496]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  9 05:26:31 np0005551604 python3.9[65650]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  9 05:26:33 np0005551604 python3.9[65734]: ansible-ansible.legacy.systemd Invoked with name=chronyd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  9 05:26:33 np0005551604 chronyd[784]: chronyd exiting
Dec  9 05:26:33 np0005551604 systemd[1]: Stopping NTP client/server...
Dec  9 05:26:33 np0005551604 systemd[1]: chronyd.service: Deactivated successfully.
Dec  9 05:26:33 np0005551604 systemd[1]: Stopped NTP client/server.
Dec  9 05:26:33 np0005551604 systemd[1]: Starting NTP client/server...
Dec  9 05:26:33 np0005551604 chronyd[65742]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Dec  9 05:26:33 np0005551604 chronyd[65742]: Frequency -26.722 +/- 0.432 ppm read from /var/lib/chrony/drift
Dec  9 05:26:33 np0005551604 chronyd[65742]: Loaded seccomp filter (level 2)
Dec  9 05:26:33 np0005551604 systemd[1]: Started NTP client/server.
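After the restart, chronyd 4.8 starts from the frequency estimate saved in /var/lib/chrony/drift (-26.722 ppm), so it converges quickly. Typical follow-up checks at this point would be:

  systemctl status chronyd --no-pager
  chronyc tracking      # offset, stratum, current reference
  chronyc sources -v    # the servers from /etc/chrony.conf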
Dec  9 05:26:33 np0005551604 systemd[1]: session-14.scope: Deactivated successfully.
Dec  9 05:26:33 np0005551604 systemd[1]: session-14.scope: Consumed 25.324s CPU time.
Dec  9 05:26:33 np0005551604 systemd-logind[806]: Session 14 logged out. Waiting for processes to exit.
Dec  9 05:26:33 np0005551604 systemd-logind[806]: Removed session 14.
Dec  9 05:26:39 np0005551604 systemd-logind[806]: New session 15 of user zuul.
Dec  9 05:26:39 np0005551604 systemd[1]: Started Session 15 of User zuul.
Dec  9 05:26:40 np0005551604 python3.9[65921]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  9 05:26:41 np0005551604 python3.9[66077]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:26:42 np0005551604 python3.9[66252]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  9 05:26:42 np0005551604 python3.9[66330]: ansible-ansible.legacy.file Invoked with group=zuul mode=0660 owner=zuul dest=/root/.config/containers/auth.json _original_basename=.umz3syzo recurse=False state=file path=/root/.config/containers/auth.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:26:43 np0005551604 python3.9[66482]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  9 05:26:44 np0005551604 python3.9[66605]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysconfig/podman_drop_in mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765276002.892835-61-175142638633980/.source _original_basename=.bl53kjju follow=False checksum=125299ce8dea7711a76292961206447f0043248b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:26:45 np0005551604 python3.9[66757]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  9 05:26:46 np0005551604 python3.9[66911]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  9 05:26:46 np0005551604 python3.9[67034]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-container-shutdown group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765276005.5177836-85-4183505911027/.source _original_basename=edpm-container-shutdown follow=False checksum=632c3792eb3dce4288b33ae7b265b71950d69f13 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec  9 05:26:47 np0005551604 python3.9[67188]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  9 05:26:47 np0005551604 python3.9[67311]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-start-podman-container group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765276006.8393881-85-195841845680834/.source _original_basename=edpm-start-podman-container follow=False checksum=b963c569d75a655c0ccae95d9bb4a2a9a4df27d1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec  9 05:26:48 np0005551604 python3.9[67463]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
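Note the mode=420 on the directory task: unquoted 0644 in YAML is parsed as an octal literal, so Ansible receives the decimal integer 420, and 420 decimal is exactly 0644 octal, leaving /etc/systemd/system-preset world-readable as intended. Quoting the mode as '0644' in the playbook avoids relying on that parsing. The arithmetic:

  python3 -c 'print(oct(420))'    # -> 0o644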
Dec  9 05:26:49 np0005551604 python3.9[67615]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  9 05:26:50 np0005551604 python3.9[67738]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm-container-shutdown.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765276009.0655894-122-161973645946222/.source.service _original_basename=edpm-container-shutdown-service follow=False checksum=6336835cb0f888670cc99de31e19c8c071444d33 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:26:50 np0005551604 python3.9[67890]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  9 05:26:51 np0005551604 python3.9[68013]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765276010.248644-137-197946848426022/.source.preset _original_basename=91-edpm-container-shutdown-preset follow=False checksum=b275e4375287528cb63464dd32f622c4f142a915 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:26:52 np0005551604 python3.9[68165]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  9 05:26:52 np0005551604 systemd[1]: Reloading.
Dec  9 05:26:52 np0005551604 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  9 05:26:52 np0005551604 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  9 05:26:52 np0005551604 systemd[1]: Reloading.
Dec  9 05:26:52 np0005551604 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  9 05:26:52 np0005551604 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  9 05:26:52 np0005551604 systemd[1]: Starting EDPM Container Shutdown...
Dec  9 05:26:52 np0005551604 systemd[1]: Finished EDPM Container Shutdown.
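[Annotation] With the unit file and its 91-edpm-container-shutdown.preset in place, a single systemd task enables and starts the service. The back-to-back "Reloading." pairs are daemon_reload=True followed by the enable step, and the rc.local / SysV "network" generator warnings repeat on every reload because those legacy files are still present. Reconstructed from the logged arguments:

    - name: Enable and start edpm-container-shutdown
      ansible.builtin.systemd:
        name: edpm-container-shutdown
        enabled: true
        state: started
        daemon_reload: true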
Dec  9 05:26:53 np0005551604 python3.9[68392]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  9 05:26:54 np0005551604 python3.9[68515]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/netns-placeholder.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765276013.0977483-160-193175974536045/.source.service _original_basename=netns-placeholder-service follow=False checksum=b61b1b5918c20c877b8b226fbf34ff89a082d972 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:26:54 np0005551604 python3.9[68667]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  9 05:26:55 np0005551604 python3.9[68790]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-netns-placeholder.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765276014.360563-175-144356894532027/.source.preset _original_basename=91-netns-placeholder-preset follow=False checksum=28b7b9aa893525d134a1eeda8a0a48fb25b736b9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:26:56 np0005551604 python3.9[68942]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  9 05:26:56 np0005551604 systemd[1]: Reloading.
Dec  9 05:26:56 np0005551604 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  9 05:26:56 np0005551604 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  9 05:26:56 np0005551604 systemd[1]: Reloading.
Dec  9 05:26:56 np0005551604 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  9 05:26:56 np0005551604 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  9 05:26:56 np0005551604 systemd[1]: Starting Create netns directory...
Dec  9 05:26:56 np0005551604 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Dec  9 05:26:56 np0005551604 systemd[1]: netns-placeholder.service: Deactivated successfully.
Dec  9 05:26:56 np0005551604 systemd[1]: Finished Create netns directory.
Dec  9 05:26:57 np0005551604 python3.9[69167]: ansible-ansible.builtin.service_facts Invoked
Dec  9 05:26:57 np0005551604 network[69184]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec  9 05:26:57 np0005551604 network[69185]: 'network-scripts' will be removed from distribution in near future.
Dec  9 05:26:57 np0005551604 network[69186]: It is advised to switch to 'NetworkManager' instead for network management.
Dec  9 05:27:01 np0005551604 python3.9[69448]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iptables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  9 05:27:01 np0005551604 systemd[1]: Reloading.
Dec  9 05:27:01 np0005551604 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  9 05:27:01 np0005551604 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  9 05:27:02 np0005551604 systemd[1]: Stopping IPv4 firewall with iptables...
Dec  9 05:27:02 np0005551604 iptables.init[69487]: iptables: Setting chains to policy ACCEPT: raw mangle filter nat [  OK  ]
Dec  9 05:27:02 np0005551604 iptables.init[69487]: iptables: Flushing firewall rules: [  OK  ]
Dec  9 05:27:02 np0005551604 systemd[1]: iptables.service: Deactivated successfully.
Dec  9 05:27:02 np0005551604 systemd[1]: Stopped IPv4 firewall with iptables.
Dec  9 05:27:03 np0005551604 python3.9[69683]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ip6tables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
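[Annotation] These entries record the cutover away from the legacy firewall: iptables.service is stopped and disabled (resetting chain policies to ACCEPT and flushing rules on the way down), and ip6tables.service gets the same treatment, though the journal shows no Stopping/Stopped pair for it, consistent with it not having been active. A loop-shaped reconstruction of the two logged invocations (the loop itself is an assumption; the log shows two separate tasks):

    - name: Stop and disable the legacy iptables services
      ansible.builtin.systemd:
        name: "{{ item }}"
        enabled: false
        state: stopped
      loop:
        - iptables.service
        - ip6tables.service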
Dec  9 05:27:04 np0005551604 python3.9[69837]: ansible-ansible.builtin.systemd Invoked with enabled=True name=nftables state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  9 05:27:04 np0005551604 systemd[1]: Reloading.
Dec  9 05:27:04 np0005551604 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  9 05:27:04 np0005551604 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  9 05:27:04 np0005551604 systemd[1]: Starting Netfilter Tables...
Dec  9 05:27:04 np0005551604 systemd[1]: Finished Netfilter Tables.
Dec  9 05:27:05 np0005551604 python3.9[70030]: ansible-ansible.legacy.command Invoked with _raw_params=nft flush ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
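[Annotation] With the legacy services gone, nftables is enabled so the persisted ruleset is restored at boot, and the live kernel ruleset is flushed so nothing from the iptables era lingers. Task shapes reconstructed from the logged arguments (names illustrative):

    - name: Enable and start the nftables service
      ansible.builtin.systemd:
        name: nftables
        enabled: true
        state: started

    - name: Start from a clean slate before loading the EDPM ruleset
      ansible.builtin.command: nft flush ruleset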
Dec  9 05:27:06 np0005551604 python3.9[70183]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  9 05:27:06 np0005551604 python3.9[70308]: ansible-ansible.legacy.copy Invoked with dest=/etc/ssh/sshd_config mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1765276025.7315059-244-196412974802121/.source validate=/usr/sbin/sshd -T -f %s follow=False _original_basename=sshd_config_block.j2 checksum=6c79f4cb960ad444688fde322eeacb8402e22d79 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:27:07 np0005551604 python3.9[70461]: ansible-ansible.builtin.systemd Invoked with name=sshd state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  9 05:27:07 np0005551604 systemd[1]: Reloading OpenSSH server daemon...
Dec  9 05:27:07 np0005551604 systemd[1]: Reloaded OpenSSH server daemon.
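[Annotation] The sshd_config deployment is the template-then-validate pattern: _original_basename=sshd_config_block.j2 suggests an ansible.builtin.template task (template renders locally, then hands off to copy, which is why it logs as ansible.legacy.copy), and validate=/usr/sbin/sshd -T -f %s makes sshd parse the candidate file before it ever replaces the live config. A sketch under those assumptions:

    - name: Deploy sshd_config, refusing any file sshd cannot parse
      ansible.builtin.template:
        src: sshd_config_block.j2
        dest: /etc/ssh/sshd_config
        mode: "0600"
        validate: /usr/sbin/sshd -T -f %s

    - name: Reload (not restart) sshd so existing sessions survive
      ansible.builtin.systemd:
        name: sshd
        state: reloaded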
Dec  9 05:27:08 np0005551604 python3.9[70617]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:27:09 np0005551604 python3.9[70769]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  9 05:27:09 np0005551604 python3.9[70892]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/sshd-networks.yaml group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765276028.680314-275-277798178836321/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=0bfc8440fd8f39002ab90252479fb794f51b5ae8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:27:10 np0005551604 python3.9[71044]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Dec  9 05:27:10 np0005551604 systemd[1]: Starting Time & Date Service...
Dec  9 05:27:10 np0005551604 systemd[1]: Started Time & Date Service.
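[Annotation] community.general.timezone sets the zone through timedated's D-Bus interface, which is what socket-activates systemd-timedated here; the service idles out again later (the "systemd-timedated.service: Deactivated successfully" entry at 05:27:41). The minimal equivalent task:

    - name: Pin the host clock to UTC
      community.general.timezone:
        name: UTC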
Dec  9 05:27:11 np0005551604 python3.9[71200]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:27:12 np0005551604 python3.9[71352]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  9 05:27:13 np0005551604 python3.9[71475]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765276031.9062324-310-182780049426286/.source.yaml follow=False _original_basename=base-rules.yaml.j2 checksum=450456afcafded6d4bdecceec7a02e806eebd8b3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:27:13 np0005551604 python3.9[71627]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  9 05:27:14 np0005551604 python3.9[71750]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765276033.3221977-325-191569428257163/.source.yaml _original_basename=.v24a2u3p follow=False checksum=97d170e1550eee4afc0af065b78cda302a97674c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:27:15 np0005551604 python3.9[71902]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  9 05:27:15 np0005551604 python3.9[72025]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/iptables.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765276034.5696132-340-267879395588508/.source.nft _original_basename=iptables.nft follow=False checksum=3e02df08f1f3ab4a513e94056dbd390e3d38fe30 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:27:16 np0005551604 python3.9[72177]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/iptables.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  9 05:27:17 np0005551604 python3.9[72330]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  9 05:27:17 np0005551604 python3[72483]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Dec  9 05:27:18 np0005551604 python3.9[72635]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  9 05:27:19 np0005551604 python3.9[72758]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765276038.1512318-379-132021651288435/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:27:19 np0005551604 python3.9[72910]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  9 05:27:20 np0005551604 python3.9[73033]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765276039.3645303-394-224266804531636/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:27:21 np0005551604 python3.9[73185]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  9 05:27:21 np0005551604 python3.9[73310]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765276040.6447146-409-57894465429923/.source.nft follow=False _original_basename=flush-chain.j2 checksum=d16337256a56373421842284fe09e4e6c7df417e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:27:22 np0005551604 python3.9[73462]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  9 05:27:23 np0005551604 python3.9[73585]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765276042.0588565-424-261765440124856/.source.nft follow=False _original_basename=chains.j2 checksum=2079f3b60590a165d1d502e763170876fc8e2984 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:27:23 np0005551604 python3.9[73737]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  9 05:27:24 np0005551604 python3.9[73860]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765276043.293206-439-259977886276396/.source.nft follow=False _original_basename=ruleset.j2 checksum=15a82a0dc61abfd6aa593407582b5b950437eb80 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:27:25 np0005551604 python3.9[74012]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:27:25 np0005551604 python3.9[74164]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  9 05:27:26 np0005551604 python3.9[74323]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
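[Annotation] Two safety nets run before the ruleset is persisted. First the generated fragments are concatenated and piped through nft -c -f -, a check-only parse that never touches the kernel; then blockinfile writes the include lines into /etc/sysconfig/nftables.conf (the #012 sequences in the journal are octal-escaped newlines) with its own validate=nft -c -f %s, and create=False so a missing distro config is an error rather than silently created. Reconstructed shape:

    - name: Syntax-check the generated EDPM ruleset without applying it
      ansible.builtin.shell: >
        set -o pipefail;
        cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft
        /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft
        /etc/nftables/edpm-jumps.nft | nft -c -f -

    - name: Persist the includes so nftables.service restores them at boot
      ansible.builtin.blockinfile:
        path: /etc/sysconfig/nftables.conf
        create: false
        validate: nft -c -f %s
        block: |
          include "/etc/nftables/iptables.nft"
          include "/etc/nftables/edpm-chains.nft"
          include "/etc/nftables/edpm-rules.nft"
          include "/etc/nftables/edpm-jumps.nft"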
Dec  9 05:27:27 np0005551604 python3.9[74476]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:27:28 np0005551604 python3.9[74628]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:27:29 np0005551604 python3.9[74780]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Dec  9 05:27:29 np0005551604 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec  9 05:27:29 np0005551604 python3.9[74934]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
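[Annotation] The hugepage steps pair a mode-0775, zuul:hugetlbfs mount-point directory with an ansible.posix.mount task per page size; state=mounted both mounts the filesystem now and persists it in /etc/fstab. The rsyslogd imjournal reload in between is unrelated journal housekeeping. A loop-shaped sketch (the loop is an assumption; the log shows one task per size):

    - name: Create the hugepage mount points
      ansible.builtin.file:
        path: "{{ item }}"
        state: directory
        owner: zuul
        group: hugetlbfs
        mode: "0775"
      loop:
        - /dev/hugepages1G
        - /dev/hugepages2M

    - name: Mount hugetlbfs with an explicit page size and persist it in fstab
      ansible.posix.mount:
        path: "{{ item.path }}"
        src: none
        fstype: hugetlbfs
        opts: "pagesize={{ item.size }}"
        state: mounted
      loop:
        - { path: /dev/hugepages1G, size: 1G }
        - { path: /dev/hugepages2M, size: 2M }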
Dec  9 05:27:30 np0005551604 systemd[1]: session-15.scope: Deactivated successfully.
Dec  9 05:27:30 np0005551604 systemd[1]: session-15.scope: Consumed 37.602s CPU time.
Dec  9 05:27:30 np0005551604 systemd-logind[806]: Session 15 logged out. Waiting for processes to exit.
Dec  9 05:27:30 np0005551604 systemd-logind[806]: Removed session 15.
Dec  9 05:27:35 np0005551604 systemd-logind[806]: New session 16 of user zuul.
Dec  9 05:27:35 np0005551604 systemd[1]: Started Session 16 of User zuul.
Dec  9 05:27:36 np0005551604 python3.9[75115]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Dec  9 05:27:37 np0005551604 python3.9[75267]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  9 05:27:38 np0005551604 python3.9[75419]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  9 05:27:39 np0005551604 python3.9[75571]: ansible-ansible.builtin.blockinfile Invoked with block=compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDv43LwrvO/gTL5Xi96EfG8s5Ayv191kgICPs2cVCBwk4tOW9h/Dv7UMFE4J7XWWq1TFTMQqpThcjvmTS5Xuo7AdEiokw1vLIZsf0vjtk7OvI7Yti49pI/u0vh+G4vx8o7KVujYLEewkVontw/WbNQQN+SwSMPRQ81nPFesWTO2JFSTjqdWZHIbI9rkYDVkKj13u8yq0jMW5rgcs6fxi8w3oGr1u+GGsoUyVflWBxXFdVgsTzVD8MfpdJzlj/RP703OORL/hThWPR4rJbHAnViikRKxtRtaapgWnX6/LxtCN4ABljRaTJTzt7Qq3mPhwzBFUwYhRrZFXAmqbgu4ex2WozgNWaExPfY1OoiqRwUnkf+SzP4huNSGGATK6z7g+GgokoCiygdpulhHWKbbsZWW9fgkg+MPZG1co20bbVqWHpc/RtJh/mxB9vyUFkMT+FbjGdJgqU32U/O1jdaq4BMpGiX3cPceWjDn7WD/K7hPe8VuMOOMpuFzc6gHecvPHIU=#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILqagmFyoIqRaVbKtHBiXCBRn68yqvKxDUudOdMGI1Vg#012compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKSHg2oYvBPxog5Vh8F78eEolpPsw5tANvlmr58EvVDdki2zd2UmC3f7nz98GeQTYqAJMp0MOwA9Esm0RnH8p0s=#012 create=True mode=0644 path=/tmp/ansible.n3ywtrf7 state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:27:39 np0005551604 python3.9[75723]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.n3ywtrf7' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  9 05:27:40 np0005551604 python3.9[75877]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.n3ywtrf7 state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
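[Annotation] Session 16 is a build-then-swap update of the system-wide known_hosts: a scratch file from tempfile, host keys gathered via the ssh_host_key_*_public fact subsets, a managed block written into the scratch file, a shell cat into /etc/ssh/ssh_known_hosts, and cleanup. A sketch of that sequence; the register name and the managed_host_keys variable are hypothetical:

    - name: Stage a scratch file for the assembled known_hosts
      ansible.builtin.tempfile:
        state: file
        prefix: ansible.
      register: known_hosts_tmp            # hypothetical register name

    - name: Gather the target's public host key facts
      ansible.builtin.setup:
        gather_subset:
          - '!all'
          - '!min'
          - ssh_host_key_rsa_public
          - ssh_host_key_ed25519_public
          - ssh_host_key_ecdsa_public

    - name: Write the managed host-key block into the scratch file
      ansible.builtin.blockinfile:
        path: "{{ known_hosts_tmp.path }}"
        create: true
        mode: "0644"
        block: "{{ managed_host_keys }}"   # hypothetical variable holding the key lines

    - name: Replace the system-wide known_hosts with the assembled file
      ansible.builtin.shell: cat '{{ known_hosts_tmp.path }}' > /etc/ssh/ssh_known_hosts

    - name: Clean up the scratch file
      ansible.builtin.file:
        path: "{{ known_hosts_tmp.path }}"
        state: absent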
Dec  9 05:27:41 np0005551604 systemd[1]: systemd-timedated.service: Deactivated successfully.
Dec  9 05:27:41 np0005551604 systemd[1]: session-16.scope: Deactivated successfully.
Dec  9 05:27:41 np0005551604 systemd[1]: session-16.scope: Consumed 3.320s CPU time.
Dec  9 05:27:41 np0005551604 systemd-logind[806]: Session 16 logged out. Waiting for processes to exit.
Dec  9 05:27:41 np0005551604 systemd-logind[806]: Removed session 16.
Dec  9 05:27:46 np0005551604 systemd-logind[806]: New session 17 of user zuul.
Dec  9 05:27:46 np0005551604 systemd[1]: Started Session 17 of User zuul.
Dec  9 05:27:47 np0005551604 python3.9[76058]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  9 05:27:49 np0005551604 python3.9[76214]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Dec  9 05:27:49 np0005551604 python3.9[76368]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  9 05:27:51 np0005551604 python3.9[76521]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  9 05:27:51 np0005551604 python3.9[76674]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  9 05:27:52 np0005551604 python3.9[76828]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  9 05:27:53 np0005551604 python3.9[76983]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
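[Annotation] The firewall refresh in session 17 is make-style: edpm-chains.nft is always replayed (safe when the file uses add table/add chain, which tolerate pre-existing objects), while the flush-and-reload of the actual rules only runs when the edpm-rules.nft.changed marker written at 05:27:25 still exists, and the marker is deleted once applied. A sketch with a hypothetical register name:

    - name: Always (re)create the EDPM chains
      ansible.builtin.command: nft -f /etc/nftables/edpm-chains.nft

    - name: Check whether the rules changed since the last apply
      ansible.builtin.stat:
        path: /etc/nftables/edpm-rules.nft.changed
      register: edpm_rules_changed         # hypothetical register name

    - name: Flush and reload the EDPM rules only when the marker exists
      ansible.builtin.shell: >
        set -o pipefail;
        cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft
        /etc/nftables/edpm-update-jumps.nft | nft -f -
      when: edpm_rules_changed.stat.exists

    - name: Consume the marker
      ansible.builtin.file:
        path: /etc/nftables/edpm-rules.nft.changed
        state: absent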
Dec  9 05:27:53 np0005551604 systemd[1]: session-17.scope: Deactivated successfully.
Dec  9 05:27:53 np0005551604 systemd[1]: session-17.scope: Consumed 4.303s CPU time.
Dec  9 05:27:53 np0005551604 systemd-logind[806]: Session 17 logged out. Waiting for processes to exit.
Dec  9 05:27:53 np0005551604 systemd-logind[806]: Removed session 17.
Dec  9 05:27:59 np0005551604 systemd-logind[806]: New session 18 of user zuul.
Dec  9 05:27:59 np0005551604 systemd[1]: Started Session 18 of User zuul.
Dec  9 05:28:00 np0005551604 python3.9[77164]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  9 05:28:01 np0005551604 python3.9[77320]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  9 05:28:02 np0005551604 python3.9[77404]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Dec  9 05:28:09 np0005551604 python3.9[77555]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  9 05:28:10 np0005551604 python3.9[77706]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
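[Annotation] Two independent reboot checks run here: needs-restarting -r from yum-utils (exit 0 means no reboot needed, exit 1 means a reboot is advised, so the role presumably tolerates both), and a scan for flag files that earlier steps may have dropped in /var/lib/openstack/reboot_required/. A sketch with hypothetical register names:

    - name: Ask dnf's needs-restarting whether a reboot is advised
      ansible.builtin.command: needs-restarting -r
      register: needs_restarting           # hypothetical register name
      changed_when: false
      failed_when: needs_restarting.rc not in [0, 1]   # rc 1 just means "reboot advised"

    - name: Look for reboot-required flag files left by earlier steps
      ansible.builtin.find:
        paths:
          - /var/lib/openstack/reboot_required/
      register: reboot_flags               # hypothetical register name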
Dec  9 05:28:11 np0005551604 python3.9[77856]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  9 05:28:13 np0005551604 python3.9[78006]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/config follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  9 05:28:13 np0005551604 systemd[1]: session-18.scope: Deactivated successfully.
Dec  9 05:28:13 np0005551604 systemd[1]: session-18.scope: Consumed 5.953s CPU time.
Dec  9 05:28:13 np0005551604 systemd-logind[806]: Session 18 logged out. Waiting for processes to exit.
Dec  9 05:28:13 np0005551604 systemd-logind[806]: Removed session 18.
Dec  9 05:28:19 np0005551604 systemd-logind[806]: New session 19 of user zuul.
Dec  9 05:28:19 np0005551604 systemd[1]: Started Session 19 of User zuul.
Dec  9 05:28:20 np0005551604 python3.9[78184]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  9 05:28:22 np0005551604 python3.9[78340]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/telemetry-power-monitoring/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  9 05:28:22 np0005551604 python3.9[78492]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/telemetry-power-monitoring/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  9 05:28:23 np0005551604 python3.9[78644]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  9 05:28:24 np0005551604 python3.9[78767]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765276102.9273956-65-129169995818331/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=75a7e9e20dc76ad58c9acf2930576cbfbda395a4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:28:24 np0005551604 python3.9[78919]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry-power-monitoring/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  9 05:28:25 np0005551604 python3.9[79042]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/telemetry-power-monitoring/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765276104.3579967-65-258333822793911/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=9df84e4bac3f43116987c3ebed189a3674efd35b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:28:26 np0005551604 python3.9[79194]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  9 05:28:26 np0005551604 python3.9[79317]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765276105.533749-65-36776499896762/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=81ecd318380a0c62250c6dfd9bd06fa8e226946d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:28:27 np0005551604 python3.9[79471]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/telemetry/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  9 05:28:27 np0005551604 python3.9[79623]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/telemetry/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  9 05:28:28 np0005551604 python3.9[79775]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  9 05:28:29 np0005551604 python3.9[79898]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/telemetry/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765276107.9996395-124-59153080833839/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=5fe32fc24b7a620b8aaf126f59be1f3926a68fae backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:28:29 np0005551604 python3.9[80050]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  9 05:28:30 np0005551604 python3.9[80173]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/telemetry/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765276109.2193258-124-265071081167496/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=9df84e4bac3f43116987c3ebed189a3674efd35b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:28:30 np0005551604 python3.9[80325]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  9 05:28:31 np0005551604 python3.9[80448]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/telemetry/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765276110.442211-124-56659737235395/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=00418fc570d3edb4f3d9d3c39240c089672a1574 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:28:32 np0005551604 python3.9[80600]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  9 05:28:32 np0005551604 python3.9[80752]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  9 05:28:33 np0005551604 python3.9[80904]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  9 05:28:34 np0005551604 python3.9[81027]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765276113.0904078-183-75482094781542/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=4c5e5c8af445d0967df7f1b7f4471a9274abcc23 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:28:35 np0005551604 python3.9[81179]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  9 05:28:35 np0005551604 python3.9[81302]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765276114.5987546-183-167151409502444/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=1313c346a0698525bbe53f7c57ab00060cafd46a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:28:36 np0005551604 python3.9[81455]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  9 05:28:36 np0005551604 python3.9[81578]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765276115.7257328-183-81561118347136/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=44e491aa84b4894862306ada50cb8a2277a0e8b7 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:28:37 np0005551604 python3.9[81730]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  9 05:28:38 np0005551604 python3.9[81882]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  9 05:28:39 np0005551604 python3.9[82034]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  9 05:28:40 np0005551604 python3.9[82157]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765276118.712649-242-65731930974114/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=b05fc4daa8b74b9267eb85bc6bd8920570ae2c1d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:28:40 np0005551604 python3.9[82309]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  9 05:28:41 np0005551604 python3.9[82432]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765276120.1661706-242-40923561531610/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=a7839e75e6e9877663be873ed0d35c6c4602de60 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:28:41 np0005551604 python3.9[82584]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  9 05:28:42 np0005551604 python3.9[82707]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765276121.3730447-242-184885627754381/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=4b46a13133a454949174e674a943f145d1adf614 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:28:43 np0005551604 python3.9[82859]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  9 05:28:43 np0005551604 chronyd[65742]: Selected source 216.197.156.83 (pool.ntp.org)
Dec  9 05:28:43 np0005551604 python3.9[83011]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  9 05:28:44 np0005551604 python3.9[83163]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  9 05:28:44 np0005551604 python3.9[83286]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765276123.8987713-301-71883151834388/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=fca906f19b0b1f3bce9c5ba5c3f336719253898e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:28:45 np0005551604 python3.9[83438]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  9 05:28:46 np0005551604 python3.9[83561]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765276125.0135424-301-133645921273745/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=1313c346a0698525bbe53f7c57ab00060cafd46a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:28:46 np0005551604 python3.9[83713]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  9 05:28:47 np0005551604 python3.9[83836]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765276126.272177-301-9933633826078/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=91b1916769aa393e01d030ea5ed9be50a21e092f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
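[Annotation] The same three-file pattern (tls.crt, ca.crt, tls.key, each root-owned and mode 0600, inside a 0755 setype=container_file_t directory) repeats for telemetry-power-monitoring, telemetry, ovn, libvirt and neutron-metadata; only the secret material differs, and ovn and neutron-metadata share a ca.crt checksum (1313c346...), suggesting a common issuing CA. A loop-shaped sketch; edpm_tls_services and the controller-side src layout are assumptions, not taken from the log:

    - name: Create the per-service certificate directories
      ansible.builtin.file:
        path: "/var/lib/openstack/certs/{{ item }}/default"
        state: directory
        owner: root
        group: root
        mode: "0755"
        setype: container_file_t
      loop: "{{ edpm_tls_services }}"      # hypothetical list variable

    - name: Install certificate, CA and key for each service
      ansible.builtin.copy:
        src: "certs/{{ item.0 }}/{{ item.1 }}"   # assumed controller-side layout
        dest: "/var/lib/openstack/certs/{{ item.0 }}/default/{{ item.1 }}"
        owner: root
        group: root
        mode: "0600"
      loop: "{{ edpm_tls_services | product(['tls.crt', 'ca.crt', 'tls.key']) | list }}"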
Dec  9 05:28:48 np0005551604 python3.9[83988]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  9 05:28:49 np0005551604 python3.9[84140]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  9 05:28:49 np0005551604 python3.9[84263]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765276128.5555575-369-272080541908209/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=0c9423b2ffdc702e705e5ef6f0f523e53f830dfe backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:28:50 np0005551604 python3.9[84415]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/repo-setup setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  9 05:28:51 np0005551604 python3.9[84567]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  9 05:28:51 np0005551604 python3.9[84690]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765276130.909626-393-221996386453576/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=0c9423b2ffdc702e705e5ef6f0f523e53f830dfe backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:28:52 np0005551604 python3.9[84842]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  9 05:28:53 np0005551604 python3.9[84994]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  9 05:28:54 np0005551604 python3.9[85118]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765276132.925269-417-30369901980855/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=0c9423b2ffdc702e705e5ef6f0f523e53f830dfe backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:28:54 np0005551604 python3.9[85270]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  9 05:28:55 np0005551604 python3.9[85422]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  9 05:28:56 np0005551604 python3.9[85545]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765276135.1637125-441-94079578207693/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=0c9423b2ffdc702e705e5ef6f0f523e53f830dfe backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:28:57 np0005551604 python3.9[85697]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/telemetry setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  9 05:28:58 np0005551604 python3.9[85851]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  9 05:28:58 np0005551604 python3.9[85974]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765276137.5972657-465-174793431027287/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=0c9423b2ffdc702e705e5ef6f0f523e53f830dfe backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:28:59 np0005551604 python3.9[86126]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/neutron-metadata setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  9 05:29:00 np0005551604 python3.9[86278]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  9 05:29:00 np0005551604 python3.9[86401]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765276139.8085477-489-202365913457828/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=0c9423b2ffdc702e705e5ef6f0f523e53f830dfe backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:29:01 np0005551604 python3.9[86553]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/bootstrap setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  9 05:29:02 np0005551604 python3.9[86705]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  9 05:29:02 np0005551604 python3.9[86828]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765276141.7960346-513-7173669678923/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=0c9423b2ffdc702e705e5ef6f0f523e53f830dfe backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:29:03 np0005551604 python3.9[86980]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/telemetry-power-monitoring setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  9 05:29:04 np0005551604 python3.9[87132]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  9 05:29:04 np0005551604 python3.9[87255]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765276143.8510988-537-2921110883999/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=0c9423b2ffdc702e705e5ef6f0f523e53f830dfe backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
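[editor's note] The four task groups above repeat one pattern per service (telemetry, neutron-metadata, bootstrap, telemetry-power-monitoring): create a cacerts directory, stat any existing bundle, then copy tls-ca-bundle.pem (same sha1 each time) into it. A minimal shell equivalent of that pattern, assuming the bundle is staged at /tmp/tls-ca-bundle.pem (hypothetical path; the playbook actually copies from an Ansible temp file):

    # Create the per-service CA directory with the logged owner/mode/context
    install -d -m 0755 -o root -g root /var/lib/openstack/cacerts/telemetry
    chcon -t container_file_t /var/lib/openstack/cacerts/telemetry
    # Install the shared CA bundle (identical checksum for every service dir)
    install -m 0644 -o root -g root /tmp/tls-ca-bundle.pem \
        /var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem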
Dec  9 05:29:05 np0005551604 systemd[1]: session-19.scope: Deactivated successfully.
Dec  9 05:29:05 np0005551604 systemd[1]: session-19.scope: Consumed 36.500s CPU time.
Dec  9 05:29:05 np0005551604 systemd-logind[806]: Session 19 logged out. Waiting for processes to exit.
Dec  9 05:29:05 np0005551604 systemd-logind[806]: Removed session 19.
Dec  9 05:29:11 np0005551604 systemd-logind[806]: New session 20 of user zuul.
Dec  9 05:29:11 np0005551604 systemd[1]: Started Session 20 of User zuul.
Dec  9 05:29:12 np0005551604 python3.9[87433]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  9 05:29:13 np0005551604 python3.9[87589]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  9 05:29:14 np0005551604 python3.9[87741]: ansible-ansible.builtin.file Invoked with group=openvswitch owner=openvswitch path=/var/lib/openvswitch/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec  9 05:29:15 np0005551604 python3.9[87891]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  9 05:29:16 np0005551604 python3.9[88043]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
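[editor's note] The seboolean task above (persistent=True, state=True) is equivalent to the following; the AVC load_policy entry that follows is the policy reload it triggers:

    # -P persists the boolean across reboots (rebuilds/reloads policy)
    setsebool -P virt_sandbox_use_netlink on
    # Verify the new value
    getsebool virt_sandbox_use_netlink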
Dec  9 05:29:20 np0005551604 dbus-broker-launch[772]: avc:  op=load_policy lsm=selinux seqno=11 res=1
Dec  9 05:29:20 np0005551604 python3.9[88199]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  9 05:29:21 np0005551604 python3.9[88283]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  9 05:29:23 np0005551604 python3.9[88436]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
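[editor's note] Taken together, the dnf and systemd tasks above install the openvswitch package and enable/start its unit; roughly:

    dnf -y install openvswitch
    systemctl enable --now openvswitch.service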
Dec  9 05:29:24 np0005551604 python3[88591]: ansible-osp.edpm.edpm_nftables_snippet Invoked with content=- rule_name: 118 neutron vxlan networks#012  rule:#012    proto: udp#012    dport: 4789#012- rule_name: 119 neutron geneve networks#012  rule:#012    proto: udp#012    dport: 6081#012    state: ["UNTRACKED"]#012- rule_name: 120 neutron geneve networks no conntrack#012  rule:#012    proto: udp#012    dport: 6081#012    table: raw#012    chain: OUTPUT#012    jump: NOTRACK#012    action: append#012    state: []#012- rule_name: 121 neutron geneve networks no conntrack#012  rule:#012    proto: udp#012    dport: 6081#012    table: raw#012    chain: PREROUTING#012    jump: NOTRACK#012    action: append#012    state: []#012 dest=/var/lib/edpm-config/firewall/ovn.yaml state=present
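[editor's note] Decoding the #012 newline escapes in the logged content, the snippet written to /var/lib/edpm-config/firewall/ovn.yaml is:

    - rule_name: 118 neutron vxlan networks
      rule:
        proto: udp
        dport: 4789
    - rule_name: 119 neutron geneve networks
      rule:
        proto: udp
        dport: 6081
        state: ["UNTRACKED"]
    - rule_name: 120 neutron geneve networks no conntrack
      rule:
        proto: udp
        dport: 6081
        table: raw
        chain: OUTPUT
        jump: NOTRACK
        action: append
        state: []
    - rule_name: 121 neutron geneve networks no conntrack
      rule:
        proto: udp
        dport: 6081
        table: raw
        chain: PREROUTING
        jump: NOTRACK
        action: append
        state: []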
Dec  9 05:29:25 np0005551604 python3.9[88743]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:29:26 np0005551604 python3.9[88895]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  9 05:29:26 np0005551604 python3.9[88973]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:29:27 np0005551604 python3.9[89125]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  9 05:29:27 np0005551604 python3.9[89205]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.aiq2tro9 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:29:28 np0005551604 python3.9[89357]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  9 05:29:28 np0005551604 python3.9[89435]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:29:29 np0005551604 python3.9[89587]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
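[editor's note] The ruleset is captured in JSON form (nft -j) so the edpm_nftables_from_files module invoked next can inspect it programmatically. For illustration (jq is an assumption here, not part of the logged run):

    # nft's JSON output has a top-level "nftables" array with one object
    # per table, chain, and rule
    nft -j list ruleset | jq '.nftables | length'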
Dec  9 05:29:30 np0005551604 python3[89740]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Dec  9 05:29:31 np0005551604 python3.9[89892]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  9 05:29:31 np0005551604 python3.9[90017]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765276170.5889516-157-81726110283577/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:29:32 np0005551604 python3.9[90169]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  9 05:29:33 np0005551604 python3.9[90294]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765276172.1185-172-221601782413379/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:29:34 np0005551604 python3.9[90446]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  9 05:29:35 np0005551604 python3.9[90571]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765276173.555542-187-160583411595189/.source.nft follow=False _original_basename=flush-chain.j2 checksum=4d3ffec49c8eb1a9b80d2f1e8cd64070063a87b4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:29:35 np0005551604 python3.9[90723]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  9 05:29:36 np0005551604 python3.9[90848]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765276175.2173684-202-135455074933278/.source.nft follow=False _original_basename=chains.j2 checksum=298ada419730ec15df17ded0cc50c97a4014a591 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:29:37 np0005551604 python3.9[91000]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  9 05:29:37 np0005551604 python3.9[91125]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765276176.4554794-217-210027049466732/.source.nft follow=False _original_basename=ruleset.j2 checksum=eb691bdb7d792c5f8ff0d719e807fe1c95b09438 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:29:38 np0005551604 python3.9[91277]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:29:38 np0005551604 python3.9[91429]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  9 05:29:39 np0005551604 python3.9[91584]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
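[editor's note] Decoded, the blockinfile task above maintains this managed block in /etc/sysconfig/nftables.conf (validated with `nft -c -f %s` before being written):

    # BEGIN ANSIBLE MANAGED BLOCK
    include "/etc/nftables/iptables.nft"
    include "/etc/nftables/edpm-chains.nft"
    include "/etc/nftables/edpm-rules.nft"
    include "/etc/nftables/edpm-jumps.nft"
    # END ANSIBLE MANAGED BLOCK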
Dec  9 05:29:40 np0005551604 python3.9[91736]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  9 05:29:41 np0005551604 python3.9[91889]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  9 05:29:41 np0005551604 python3.9[92043]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  9 05:29:42 np0005551604 python3.9[92198]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
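[editor's note] The preceding steps form a check-then-apply sequence: dry-run the full concatenated ruleset, install the chain definitions, then, because the edpm-rules.nft.changed marker existed, flush and reload the rules and remove the marker on success. Condensed into shell:

    # 1. Syntax-check the complete ruleset without applying it (-c = check only)
    cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft \
        /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft \
        /etc/nftables/edpm-jumps.nft | nft -c -f -
    # 2. Install the chain definitions
    nft -f /etc/nftables/edpm-chains.nft
    # 3. Flush and reload the EDPM rules as one stdin transaction
    cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft \
        /etc/nftables/edpm-update-jumps.nft | nft -f -
    # 4. Clear the change marker
    rm -f /etc/nftables/edpm-rules.nft.changed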
Dec  9 05:29:43 np0005551604 python3.9[92348]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'machine'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  9 05:29:44 np0005551604 python3.9[92501]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings="datacentre:0e:0a:f2:93:49:d5" external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch #012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  9 05:29:44 np0005551604 ovs-vsctl[92502]: ovs|00001|vsctl|INFO|Called as ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings=datacentre:0e:0a:f2:93:49:d5 external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch
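[editor's note] Reflowed for readability, the logged ovs-vsctl call that registers this chassis with OVN is:

    ovs-vsctl set open . \
        external_ids:hostname=compute-0.ctlplane.example.com \
        external_ids:ovn-bridge=br-int \
        external_ids:ovn-bridge-mappings=datacentre:br-ex \
        external_ids:ovn-chassis-mac-mappings="datacentre:0e:0a:f2:93:49:d5" \
        external_ids:ovn-encap-ip=172.19.0.100 \
        external_ids:ovn-encap-type=geneve \
        external_ids:ovn-encap-tos=0 \
        external_ids:ovn-match-northd-version=False \
        external_ids:ovn-monitor-all=True \
        external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 \
        external_ids:ovn-remote-probe-interval=60000 \
        external_ids:ovn-ofctrl-wait-before-clear=8000 \
        external_ids:rundir=/var/run/openvswitch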
Dec  9 05:29:45 np0005551604 python3.9[92654]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail#012ovs-vsctl show | grep -q "Manager"#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  9 05:29:45 np0005551604 python3.9[92809]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl --timeout=5 --id=@manager -- create Manager target=\"ptcp:********@manager#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  9 05:29:45 np0005551604 ovs-vsctl[92810]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=5 --id=@manager -- create Manager "target=\"ptcp:6640:127.0.0.1\"" -- add Open_vSwitch . manager_options @manager
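[editor's note] The `********` in the preceding ansible entry is Ansible's secret-redaction heuristic tripping on the host:port pattern; the ovs-vsctl log line shows the actual command, which runs only when the earlier `ovs-vsctl show | grep -q "Manager"` check found no manager, and exposes the local OVSDB on 127.0.0.1:6640:

    ovs-vsctl --timeout=5 --id=@manager \
        -- create Manager 'target="ptcp:6640:127.0.0.1"' \
        -- add Open_vSwitch . manager_options @manager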
Dec  9 05:29:46 np0005551604 python3.9[92960]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  9 05:29:47 np0005551604 python3.9[93114]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  9 05:29:47 np0005551604 python3.9[93266]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  9 05:29:48 np0005551604 python3.9[93344]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  9 05:29:48 np0005551604 python3.9[93496]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  9 05:29:49 np0005551604 python3.9[93574]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  9 05:29:49 np0005551604 python3.9[93726]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:29:50 np0005551604 python3.9[93878]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  9 05:29:50 np0005551604 python3.9[93956]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:29:54 np0005551604 python3.9[94108]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  9 05:29:54 np0005551604 python3.9[94186]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:29:55 np0005551604 python3.9[94338]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  9 05:29:55 np0005551604 systemd[1]: Reloading.
Dec  9 05:29:55 np0005551604 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  9 05:29:55 np0005551604 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
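[editor's note] The systemd task above (daemon_reload=True, enabled=True, state=started) is equivalent to:

    systemctl daemon-reload
    systemctl enable --now edpm-container-shutdown.service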
Dec  9 05:29:56 np0005551604 python3.9[94528]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  9 05:29:56 np0005551604 python3.9[94606]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:29:57 np0005551604 python3.9[94758]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  9 05:29:57 np0005551604 python3.9[94838]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:29:58 np0005551604 python3.9[94990]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  9 05:29:58 np0005551604 systemd[1]: Reloading.
Dec  9 05:29:58 np0005551604 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  9 05:29:58 np0005551604 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  9 05:29:58 np0005551604 systemd[1]: Starting Create netns directory...
Dec  9 05:29:58 np0005551604 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Dec  9 05:29:58 np0005551604 systemd[1]: netns-placeholder.service: Deactivated successfully.
Dec  9 05:29:58 np0005551604 systemd[1]: Finished Create netns directory.
Dec  9 05:29:59 np0005551604 python3.9[95184]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  9 05:30:00 np0005551604 python3.9[95336]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_controller/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  9 05:30:01 np0005551604 python3.9[95459]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_controller/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765276199.9331644-468-258861547626813/.source _original_basename=healthcheck follow=False checksum=4098dd010265fabdf5c26b97d169fc4e575ff457 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec  9 05:30:02 np0005551604 python3.9[95613]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  9 05:30:02 np0005551604 python3.9[95765]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_controller.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  9 05:30:03 np0005551604 python3.9[95888]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_controller.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1765276202.1966534-493-41773020000150/.source.json _original_basename=.l1fj4oga follow=False checksum=2328fc98619beeb08ee32b01f15bb43094c10b61 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
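[editor's note] The kolla config file written here drives kolla_set_configs/kolla_start inside the container. Its exact contents are not logged, but given the "Running command" lines later in this log it plausibly reduces to something like the following sketch (not the actual file, which may also carry config_files/permissions entries):

    {
      "command": "/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt"
    }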
Dec  9 05:30:03 np0005551604 python3.9[96040]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_controller state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:30:06 np0005551604 python3.9[96467]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_controller config_pattern=*.json debug=False
Dec  9 05:30:07 np0005551604 python3.9[96619]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec  9 05:30:08 np0005551604 python3.9[96771]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Dec  9 05:30:08 np0005551604 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  9 05:30:09 np0005551604 python3[96934]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_controller config_id=ovn_controller config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Dec  9 05:30:09 np0005551604 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  9 05:30:09 np0005551604 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  9 05:30:09 np0005551604 podman[96970]: 2025-12-09 10:30:09.510480093 +0000 UTC m=+0.019761720 image pull a17927617ef5a603f0594ee0d6df65aabdc9e0303ccc5a52c36f193de33ee0fe quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Dec  9 05:30:10 np0005551604 podman[96970]: 2025-12-09 10:30:10.999655491 +0000 UTC m=+1.508937128 container create e0a077177b2f078df1f170a6e5c0e8e08d4365b999ec0c487047ed6ab628f3d6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, container_name=ovn_controller, io.buildah.version=1.41.3)
Dec  9 05:30:11 np0005551604 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  9 05:30:11 np0005551604 python3[96934]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_controller --conmon-pidfile /run/ovn_controller.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=ovn_controller --label container_name=ovn_controller --label managed_by=edpm_ansible --label config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --user root --volume /lib/modules:/lib/modules:ro --volume /run:/run --volume /var/lib/openvswitch/ovn:/run/ovn:shared,z --volume /var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
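[editor's note] Reflowed, and with the long --label config_data value elided (see the debug line above for it in full), the logged podman create is:

    podman create --name ovn_controller \
        --conmon-pidfile /run/ovn_controller.pid \
        --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS \
        --healthcheck-command /openstack/healthcheck \
        --label config_id=ovn_controller \
        --label container_name=ovn_controller \
        --label managed_by=edpm_ansible \
        --log-driver journald --log-level info \
        --network host --privileged=True --user root \
        --volume /lib/modules:/lib/modules:ro \
        --volume /run:/run \
        --volume /var/lib/openvswitch/ovn:/run/ovn:shared,z \
        --volume /var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro \
        --volume /var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z \
        --volume /var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z \
        --volume /var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z \
        --volume /var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z \
        --volume /var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z \
        quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified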
Dec  9 05:30:11 np0005551604 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  9 05:30:11 np0005551604 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Dec  9 05:30:11 np0005551604 python3.9[97156]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  9 05:30:12 np0005551604 python3.9[97310]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_controller.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:30:12 np0005551604 python3.9[97386]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_controller_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  9 05:30:13 np0005551604 python3.9[97537]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765276213.0607538-581-8441588140945/source dest=/etc/systemd/system/edpm_ovn_controller.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:30:14 np0005551604 python3.9[97613]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec  9 05:30:14 np0005551604 systemd[1]: Reloading.
Dec  9 05:30:14 np0005551604 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  9 05:30:14 np0005551604 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  9 05:30:15 np0005551604 python3.9[97725]: ansible-systemd Invoked with state=restarted name=edpm_ovn_controller.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  9 05:30:15 np0005551604 systemd[1]: Reloading.
Dec  9 05:30:15 np0005551604 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  9 05:30:15 np0005551604 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  9 05:30:15 np0005551604 systemd[1]: Starting ovn_controller container...
Dec  9 05:30:16 np0005551604 systemd[1]: Created slice Virtual Machine and Container Slice.
Dec  9 05:30:16 np0005551604 systemd[1]: Started libcrun container.
Dec  9 05:30:16 np0005551604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/657d7a39cf9cd41507dcd7760d5ebf320949ccaec507605954f660be41deb58c/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Dec  9 05:30:16 np0005551604 systemd[1]: Started /usr/bin/podman healthcheck run e0a077177b2f078df1f170a6e5c0e8e08d4365b999ec0c487047ed6ab628f3d6.
Dec  9 05:30:17 np0005551604 podman[97765]: 2025-12-09 10:30:17.163882455 +0000 UTC m=+1.443466823 container init e0a077177b2f078df1f170a6e5c0e8e08d4365b999ec0c487047ed6ab628f3d6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, config_id=ovn_controller)
Dec  9 05:30:17 np0005551604 ovn_controller[97780]: + sudo -E kolla_set_configs
Dec  9 05:30:17 np0005551604 podman[97765]: 2025-12-09 10:30:17.193902751 +0000 UTC m=+1.473487089 container start e0a077177b2f078df1f170a6e5c0e8e08d4365b999ec0c487047ed6ab628f3d6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec  9 05:30:17 np0005551604 systemd[1]: Created slice User Slice of UID 0.
Dec  9 05:30:17 np0005551604 systemd[1]: Starting User Runtime Directory /run/user/0...
Dec  9 05:30:17 np0005551604 systemd[1]: Finished User Runtime Directory /run/user/0.
Dec  9 05:30:17 np0005551604 systemd[1]: Starting User Manager for UID 0...
Dec  9 05:30:17 np0005551604 systemd[97798]: Queued start job for default target Main User Target.
Dec  9 05:30:17 np0005551604 systemd[97798]: Created slice User Application Slice.
Dec  9 05:30:17 np0005551604 systemd[97798]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system).
Dec  9 05:30:17 np0005551604 systemd[97798]: Started Daily Cleanup of User's Temporary Directories.
Dec  9 05:30:17 np0005551604 systemd[97798]: Reached target Paths.
Dec  9 05:30:17 np0005551604 systemd[97798]: Reached target Timers.
Dec  9 05:30:17 np0005551604 systemd[97798]: Starting D-Bus User Message Bus Socket...
Dec  9 05:30:17 np0005551604 systemd[97798]: Starting Create User's Volatile Files and Directories...
Dec  9 05:30:17 np0005551604 systemd[97798]: Listening on D-Bus User Message Bus Socket.
Dec  9 05:30:17 np0005551604 systemd[97798]: Reached target Sockets.
Dec  9 05:30:17 np0005551604 systemd[97798]: Finished Create User's Volatile Files and Directories.
Dec  9 05:30:17 np0005551604 systemd[97798]: Reached target Basic System.
Dec  9 05:30:17 np0005551604 systemd[97798]: Reached target Main User Target.
Dec  9 05:30:17 np0005551604 systemd[97798]: Startup finished in 151ms.
Dec  9 05:30:17 np0005551604 systemd[1]: Started User Manager for UID 0.
Dec  9 05:30:17 np0005551604 systemd[1]: Started Session c1 of User root.
Dec  9 05:30:17 np0005551604 ovn_controller[97780]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec  9 05:30:17 np0005551604 ovn_controller[97780]: INFO:__main__:Validating config file
Dec  9 05:30:17 np0005551604 ovn_controller[97780]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec  9 05:30:17 np0005551604 ovn_controller[97780]: INFO:__main__:Writing out command to execute
Dec  9 05:30:17 np0005551604 systemd[1]: session-c1.scope: Deactivated successfully.
Dec  9 05:30:17 np0005551604 ovn_controller[97780]: ++ cat /run_command
Dec  9 05:30:17 np0005551604 ovn_controller[97780]: + CMD='/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Dec  9 05:30:17 np0005551604 ovn_controller[97780]: + ARGS=
Dec  9 05:30:17 np0005551604 ovn_controller[97780]: + sudo kolla_copy_cacerts
Dec  9 05:30:17 np0005551604 systemd[1]: Started Session c2 of User root.
Dec  9 05:30:17 np0005551604 systemd[1]: session-c2.scope: Deactivated successfully.
Dec  9 05:30:17 np0005551604 ovn_controller[97780]: + [[ ! -n '' ]]
Dec  9 05:30:17 np0005551604 ovn_controller[97780]: + . kolla_extend_start
Dec  9 05:30:17 np0005551604 ovn_controller[97780]: + echo 'Running command: '\''/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '\'''
Dec  9 05:30:17 np0005551604 ovn_controller[97780]: Running command: '/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Dec  9 05:30:17 np0005551604 ovn_controller[97780]: + umask 0022
Dec  9 05:30:17 np0005551604 ovn_controller[97780]: + exec /usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt
Dec  9 05:30:17 np0005551604 ovn_controller[97780]: 2025-12-09T10:30:17Z|00001|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Dec  9 05:30:17 np0005551604 ovn_controller[97780]: 2025-12-09T10:30:17Z|00002|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Dec  9 05:30:17 np0005551604 ovn_controller[97780]: 2025-12-09T10:30:17Z|00003|main|INFO|OVN internal version is: [24.03.8-20.33.0-76.8]
Dec  9 05:30:17 np0005551604 ovn_controller[97780]: 2025-12-09T10:30:17Z|00004|main|INFO|OVS IDL reconnected, force recompute.
Dec  9 05:30:17 np0005551604 ovn_controller[97780]: 2025-12-09T10:30:17Z|00005|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Dec  9 05:30:17 np0005551604 ovn_controller[97780]: 2025-12-09T10:30:17Z|00006|main|INFO|OVNSB IDL reconnected, force recompute.
Dec  9 05:30:17 np0005551604 NetworkManager[56302]: <info>  [1765276217.7326] manager: (br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/14)
Dec  9 05:30:17 np0005551604 NetworkManager[56302]: <info>  [1765276217.7335] device (br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  9 05:30:17 np0005551604 NetworkManager[56302]: <warn>  [1765276217.7339] device (br-int)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Dec  9 05:30:17 np0005551604 NetworkManager[56302]: <info>  [1765276217.7349] manager: (br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/15)
Dec  9 05:30:17 np0005551604 NetworkManager[56302]: <info>  [1765276217.7358] manager: (br-int): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/16)
Dec  9 05:30:17 np0005551604 NetworkManager[56302]: <info>  [1765276217.7363] device (br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Dec  9 05:30:17 np0005551604 kernel: br-int: entered promiscuous mode
Dec  9 05:30:17 np0005551604 ovn_controller[97780]: 2025-12-09T10:30:17Z|00007|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connected
Dec  9 05:30:17 np0005551604 ovn_controller[97780]: 2025-12-09T10:30:17Z|00008|features|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Dec  9 05:30:17 np0005551604 ovn_controller[97780]: 2025-12-09T10:30:17Z|00009|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Dec  9 05:30:17 np0005551604 ovn_controller[97780]: 2025-12-09T10:30:17Z|00010|features|INFO|OVS Feature: ct_zero_snat, state: supported
Dec  9 05:30:17 np0005551604 ovn_controller[97780]: 2025-12-09T10:30:17Z|00011|features|INFO|OVS Feature: ct_flush, state: supported
Dec  9 05:30:17 np0005551604 ovn_controller[97780]: 2025-12-09T10:30:17Z|00012|features|INFO|OVS Feature: dp_hash_l4_sym_support, state: supported
Dec  9 05:30:17 np0005551604 ovn_controller[97780]: 2025-12-09T10:30:17Z|00013|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Dec  9 05:30:17 np0005551604 ovn_controller[97780]: 2025-12-09T10:30:17Z|00014|main|INFO|OVS feature set changed, force recompute.
Dec  9 05:30:17 np0005551604 ovn_controller[97780]: 2025-12-09T10:30:17Z|00015|ofctrl|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Dec  9 05:30:17 np0005551604 ovn_controller[97780]: 2025-12-09T10:30:17Z|00016|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Dec  9 05:30:17 np0005551604 ovn_controller[97780]: 2025-12-09T10:30:17Z|00017|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Dec  9 05:30:17 np0005551604 ovn_controller[97780]: 2025-12-09T10:30:17Z|00018|ofctrl|INFO|ofctrl-wait-before-clear is now 8000 ms (was 0 ms)
Dec  9 05:30:17 np0005551604 ovn_controller[97780]: 2025-12-09T10:30:17Z|00019|main|INFO|OVS OpenFlow connection reconnected, force recompute.
Dec  9 05:30:17 np0005551604 ovn_controller[97780]: 2025-12-09T10:30:17Z|00020|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Dec  9 05:30:17 np0005551604 ovn_controller[97780]: 2025-12-09T10:30:17Z|00021|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Dec  9 05:30:17 np0005551604 ovn_controller[97780]: 2025-12-09T10:30:17Z|00022|main|INFO|OVS feature set changed, force recompute.
Dec  9 05:30:17 np0005551604 ovn_controller[97780]: 2025-12-09T10:30:17Z|00023|features|INFO|OVS DB schema supports 4 flow table prefixes, our IDL supports: 4
Dec  9 05:30:17 np0005551604 ovn_controller[97780]: 2025-12-09T10:30:17Z|00024|main|INFO|Setting flow table prefixes: ip_src, ip_dst, ipv6_src, ipv6_dst.
Dec  9 05:30:17 np0005551604 ovn_controller[97780]: 2025-12-09T10:30:17Z|00001|statctrl(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Dec  9 05:30:17 np0005551604 ovn_controller[97780]: 2025-12-09T10:30:17Z|00001|pinctrl(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Dec  9 05:30:17 np0005551604 ovn_controller[97780]: 2025-12-09T10:30:17Z|00002|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Dec  9 05:30:17 np0005551604 ovn_controller[97780]: 2025-12-09T10:30:17Z|00002|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Dec  9 05:30:17 np0005551604 ovn_controller[97780]: 2025-12-09T10:30:17Z|00003|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Dec  9 05:30:17 np0005551604 ovn_controller[97780]: 2025-12-09T10:30:17Z|00003|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Dec  9 05:30:17 np0005551604 NetworkManager[56302]: <info>  [1765276217.7721] manager: (ovn-54258f-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/17)
Dec  9 05:30:17 np0005551604 systemd-udevd[97826]: Network interface NamePolicy= disabled on kernel command line.
Dec  9 05:30:17 np0005551604 kernel: genev_sys_6081: entered promiscuous mode
Dec  9 05:30:17 np0005551604 systemd-udevd[97829]: Network interface NamePolicy= disabled on kernel command line.
Dec  9 05:30:17 np0005551604 NetworkManager[56302]: <info>  [1765276217.8063] device (genev_sys_6081): carrier: link connected
Dec  9 05:30:17 np0005551604 NetworkManager[56302]: <info>  [1765276217.8069] manager: (genev_sys_6081): new Generic device (/org/freedesktop/NetworkManager/Devices/18)
Dec  9 05:30:17 np0005551604 edpm-start-podman-container[97765]: ovn_controller
Dec  9 05:30:17 np0005551604 edpm-start-podman-container[97764]: Creating additional drop-in dependency for "ovn_controller" (e0a077177b2f078df1f170a6e5c0e8e08d4365b999ec0c487047ed6ab628f3d6)
Dec  9 05:30:17 np0005551604 systemd[1]: Reloading.
Dec  9 05:30:18 np0005551604 podman[97786]: 2025-12-09 10:30:18.050211848 +0000 UTC m=+0.842407068 container health_status e0a077177b2f078df1f170a6e5c0e8e08d4365b999ec0c487047ed6ab628f3d6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, managed_by=edpm_ansible)
Dec  9 05:30:18 np0005551604 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  9 05:30:18 np0005551604 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  9 05:30:18 np0005551604 systemd[1]: Started ovn_controller container.
Dec  9 05:30:19 np0005551604 python3.9[98054]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove open . other_config hw-offload#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  9 05:30:19 np0005551604 ovs-vsctl[98055]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove open . other_config hw-offload
Dec  9 05:30:19 np0005551604 python3.9[98207]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl get Open_vSwitch . external_ids:ovn-cms-options | sed 's/\"//g'#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  9 05:30:20 np0005551604 ovs-vsctl[98209]: ovs|00001|db_ctl_base|ERR|no key "ovn-cms-options" in Open_vSwitch record "." column external_ids
Dec  9 05:30:20 np0005551604 python3.9[98362]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  9 05:30:20 np0005551604 ovs-vsctl[98363]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options
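[editor's note] The db_ctl_base ERR above is benign: the `get` fails because ovn-cms-options was never set on this chassis, and the subsequent `remove` is then a no-op. If the error noise matters, ovs-vsctl's --if-exists option suppresses it:

    # Prints nothing instead of erroring when the key is absent
    ovs-vsctl --if-exists get Open_vSwitch . external_ids:ovn-cms-options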
Dec  9 05:30:21 np0005551604 systemd[1]: session-20.scope: Deactivated successfully.
Dec  9 05:30:21 np0005551604 systemd[1]: session-20.scope: Consumed 45.753s CPU time.
Dec  9 05:30:21 np0005551604 systemd-logind[806]: Session 20 logged out. Waiting for processes to exit.
Dec  9 05:30:21 np0005551604 systemd-logind[806]: Removed session 20.
Dec  9 05:30:26 np0005551604 systemd-logind[806]: New session 22 of user zuul.
Dec  9 05:30:26 np0005551604 systemd[1]: Started Session 22 of User zuul.
Dec  9 05:30:27 np0005551604 systemd[1]: Stopping User Manager for UID 0...
Dec  9 05:30:27 np0005551604 systemd[97798]: Activating special unit Exit the Session...
Dec  9 05:30:27 np0005551604 systemd[97798]: Stopped target Main User Target.
Dec  9 05:30:27 np0005551604 systemd[97798]: Stopped target Basic System.
Dec  9 05:30:27 np0005551604 systemd[97798]: Stopped target Paths.
Dec  9 05:30:27 np0005551604 systemd[97798]: Stopped target Sockets.
Dec  9 05:30:27 np0005551604 systemd[97798]: Stopped target Timers.
Dec  9 05:30:27 np0005551604 systemd[97798]: Stopped Daily Cleanup of User's Temporary Directories.
Dec  9 05:30:27 np0005551604 systemd[97798]: Closed D-Bus User Message Bus Socket.
Dec  9 05:30:27 np0005551604 systemd[97798]: Stopped Create User's Volatile Files and Directories.
Dec  9 05:30:27 np0005551604 systemd[97798]: Removed slice User Application Slice.
Dec  9 05:30:27 np0005551604 systemd[97798]: Reached target Shutdown.
Dec  9 05:30:27 np0005551604 systemd[97798]: Finished Exit the Session.
Dec  9 05:30:27 np0005551604 systemd[97798]: Reached target Exit the Session.
Dec  9 05:30:27 np0005551604 systemd[1]: user@0.service: Deactivated successfully.
Dec  9 05:30:27 np0005551604 systemd[1]: Stopped User Manager for UID 0.
Dec  9 05:30:27 np0005551604 systemd[1]: Stopping User Runtime Directory /run/user/0...
Dec  9 05:30:27 np0005551604 systemd[1]: run-user-0.mount: Deactivated successfully.
Dec  9 05:30:27 np0005551604 systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Dec  9 05:30:27 np0005551604 systemd[1]: Stopped User Runtime Directory /run/user/0.
Dec  9 05:30:27 np0005551604 systemd[1]: Removed slice User Slice of UID 0.
Dec  9 05:30:28 np0005551604 python3.9[98544]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  9 05:30:29 np0005551604 python3.9[98702]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec  9 05:30:30 np0005551604 python3.9[98854]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  9 05:30:30 np0005551604 python3.9[99006]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/kill_scripts setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  9 05:30:31 np0005551604 python3.9[99158]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/ovn-metadata-proxy setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  9 05:30:32 np0005551604 python3.9[99310]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/external/pids setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
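The five ansible-ansible.builtin.file tasks above create the metadata agent's state directories. A roughly equivalent shell sketch (paths, ownership, and the container_file_t SELinux type are taken from the logged arguments):

    install -d -o zuul -g zuul /var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent
    install -d -o zuul -g zuul -m 0755 /var/lib/neutron /var/lib/neutron/kill_scripts \
        /var/lib/neutron/ovn-metadata-proxy /var/lib/neutron/external/pids
    chcon -R -t container_file_t /var/lib/neutron \
        /var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent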
Dec  9 05:30:32 np0005551604 python3.9[99460]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  9 05:30:33 np0005551604 python3.9[99612]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
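The seboolean task persists a single SELinux boolean (allowing confined container processes to use netlink sockets); the CLI equivalent is:

    setsebool -P virt_sandbox_use_netlink on
    getsebool virt_sandbox_use_netlink   # expect: virt_sandbox_use_netlink --> on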
Dec  9 05:30:35 np0005551604 python3.9[99762]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/ovn_metadata_haproxy_wrapper follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  9 05:30:35 np0005551604 python3.9[99883]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/ovn_metadata_haproxy_wrapper mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765276234.4913602-86-229279897171716/.source follow=False _original_basename=haproxy.j2 checksum=95c62e64c8f82dd9393a560d1b052dc98d38f810 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  9 05:30:36 np0005551604 python3.9[100034]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/kill_scripts/haproxy-kill follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  9 05:30:37 np0005551604 python3.9[100155]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/kill_scripts/haproxy-kill mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765276236.0862122-101-252001484919297/.source follow=False _original_basename=kill-script.j2 checksum=2dfb5489f491f61b95691c3bf95fa1fe48ff3700 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
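Both copied scripts are redacted in the log (content=NOT_LOGGING_PARAMETER), but their SHA-1 checksums are recorded, so the deployment can be verified against them:

    sha1sum /var/lib/neutron/ovn_metadata_haproxy_wrapper /var/lib/neutron/kill_scripts/haproxy-kill
    # expect 95c62e64c8f82dd9393a560d1b052dc98d38f810 and
    #        2dfb5489f491f61b95691c3bf95fa1fe48ff3700, per the checksum= fields above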
Dec  9 05:30:38 np0005551604 python3.9[100307]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  9 05:30:39 np0005551604 python3.9[100391]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  9 05:30:41 np0005551604 python3.9[100544]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
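The dnf and systemd tasks above reduce to installing the package and enabling/starting its unit:

    dnf install -y openvswitch
    systemctl enable --now openvswitch.service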
Dec  9 05:30:42 np0005551604 python3.9[100697]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  9 05:30:43 np0005551604 python3.9[100818]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-rootwrap.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765276241.821226-138-230799949340148/.source.conf follow=False _original_basename=rootwrap.conf.j2 checksum=11f2cfb4b7d97b2cef3c2c2d88089e6999cffe22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  9 05:30:43 np0005551604 python3.9[100968]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  9 05:30:44 np0005551604 python3.9[101089]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765276243.4243717-138-260805423351174/.source.conf follow=False _original_basename=neutron-ovn-metadata-agent.conf.j2 checksum=8bc979abbe81c2cf3993a225517a7e2483e20443 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  9 05:30:45 np0005551604 python3.9[101239]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  9 05:30:46 np0005551604 python3.9[101360]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/10-neutron-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765276245.1735392-182-275146435253262/.source.conf _original_basename=10-neutron-metadata.conf follow=False checksum=ca7d4d155f5b812fab1a3b70e34adb495d291b8d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  9 05:30:46 np0005551604 python3.9[101510]: ansible-ansible.legacy.stat Invoked with path=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  9 05:30:47 np0005551604 python3.9[101631]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/05-nova-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765276246.4171643-182-33605177935004/.source.conf _original_basename=05-nova-metadata.conf follow=False checksum=a14d6b38898a379cd37fc0bf365d17f10859446f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
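After the four copy tasks the agent's drop-in config directory should hold exactly the destinations logged above; a quick check:

    ls -lZ /var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent/
    # 01-rootwrap.conf, 01-neutron-ovn-metadata-agent.conf,
    # 05-nova-metadata.conf, 10-neutron-metadata.conf (mode 0644, type container_file_t)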
Dec  9 05:30:48 np0005551604 python3.9[101781]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  9 05:30:48 np0005551604 ovn_controller[97780]: 2025-12-09T10:30:48Z|00025|memory|INFO|16128 kB peak resident set size after 31.0 seconds
Dec  9 05:30:48 np0005551604 ovn_controller[97780]: 2025-12-09T10:30:48Z|00026|memory|INFO|idl-cells-OVN_Southbound:239 idl-cells-Open_vSwitch:471 ofctrl_desired_flow_usage-KB:5 ofctrl_installed_flow_usage-KB:4 ofctrl_sb_flow_ref_usage-KB:2
Dec  9 05:30:48 np0005551604 podman[101907]: 2025-12-09 10:30:48.70987903 +0000 UTC m=+0.120667817 container health_status e0a077177b2f078df1f170a6e5c0e8e08d4365b999ec0c487047ed6ab628f3d6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS)
Dec  9 05:30:48 np0005551604 python3.9[101946]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  9 05:30:49 np0005551604 python3.9[102110]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  9 05:30:49 np0005551604 python3.9[102188]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  9 05:30:50 np0005551604 python3.9[102340]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  9 05:30:51 np0005551604 python3.9[102418]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  9 05:30:51 np0005551604 python3.9[102570]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:30:52 np0005551604 python3.9[102722]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  9 05:30:52 np0005551604 python3.9[102800]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:30:53 np0005551604 python3.9[102952]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  9 05:30:54 np0005551604 python3.9[103030]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:30:54 np0005551604 python3.9[103182]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  9 05:30:54 np0005551604 systemd[1]: Reloading.
Dec  9 05:30:54 np0005551604 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  9 05:30:54 np0005551604 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
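The systemd task at 05:30:54 performs a daemon-reload (hence the generator messages above) and enables/starts the shutdown helper; the systemctl equivalent is:

    systemctl daemon-reload
    systemctl enable --now edpm-container-shutdown.service
    # the 91-edpm-container-shutdown.preset installed above presumably carries the
    # matching 'enable' policy for systemctl preset runs (its content is not logged)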
Dec  9 05:30:56 np0005551604 python3.9[103371]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  9 05:30:56 np0005551604 python3.9[103449]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:30:57 np0005551604 python3.9[103601]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  9 05:30:58 np0005551604 python3.9[103679]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:30:58 np0005551604 python3.9[103833]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  9 05:30:58 np0005551604 systemd[1]: Reloading.
Dec  9 05:30:58 np0005551604 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  9 05:30:58 np0005551604 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  9 05:30:59 np0005551604 systemd[1]: Starting Create netns directory...
Dec  9 05:30:59 np0005551604 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Dec  9 05:30:59 np0005551604 systemd[1]: netns-placeholder.service: Deactivated successfully.
Dec  9 05:30:59 np0005551604 systemd[1]: Finished Create netns directory.
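netns-placeholder.service runs once to completion ("Create netns directory"); its unit file is not shown in the log, but the run-netns-placeholder.mount teardown and the later /run/netns:/run/netns:shared container volume suggest it mounts /run/netns with shared propagation. A sketch for checking that, assuming this reading is right:

    findmnt -o TARGET,PROPAGATION /run/netns   # expect PROPAGATION=shared
    ip netns list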
Dec  9 05:31:00 np0005551604 python3.9[104027]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  9 05:31:00 np0005551604 python3.9[104179]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_metadata_agent/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  9 05:31:01 np0005551604 python3.9[104302]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_metadata_agent/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765276260.2765355-333-100835462900902/.source _original_basename=healthcheck follow=False checksum=898a5a1fcd473cf731177fc866e3bd7ebf20a131 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec  9 05:31:02 np0005551604 python3.9[104454]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  9 05:31:03 np0005551604 python3.9[104606]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_metadata_agent.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  9 05:31:03 np0005551604 python3.9[104729]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_metadata_agent.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1765276262.4765635-358-114433721574861/.source.json _original_basename=.550l_lr4 follow=False checksum=a908ef151ded3a33ae6c9ac8be72a35e5e33b9dc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
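The kolla config for the agent is copied with mode 0600 and its content redacted; on the host it can be pretty-printed and syntax-checked with:

    python3 -m json.tool /var/lib/kolla/config_files/ovn_metadata_agent.json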
Dec  9 05:31:04 np0005551604 python3.9[104881]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:31:06 np0005551604 python3.9[105308]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_pattern=*.json debug=False
Dec  9 05:31:07 np0005551604 python3.9[105460]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec  9 05:31:08 np0005551604 python3.9[105612]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Dec  9 05:31:10 np0005551604 python3[105790]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_id=ovn_metadata_agent config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Dec  9 05:31:10 np0005551604 podman[105826]: 2025-12-09 10:31:10.320590891 +0000 UTC m=+0.057880659 container create 8f562587c42532f877bd4ac5090cf2d81dd9415b6201e22f74972e6d6b9e9403 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec  9 05:31:10 np0005551604 podman[105826]: 2025-12-09 10:31:10.288094754 +0000 UTC m=+0.025384502 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec  9 05:31:10 np0005551604 python3[105790]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_metadata_agent --cgroupns=host --conmon-pidfile /run/ovn_metadata_agent.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d --healthcheck-command /openstack/healthcheck --label config_id=ovn_metadata_agent --label container_name=ovn_metadata_agent --label managed_by=edpm_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']} --log-driver journald --log-level info --network host --pid host --privileged=True --user root --volume /run/openvswitch:/run/openvswitch:z --volume /var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z --volume /run/netns:/run/netns:shared --volume /var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/neutron:/var/lib/neutron:shared,z --volume /var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro --volume /var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro --volume /var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
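Note that edpm_container_manage serializes the whole config_data dict into a container label (see --label config_data={...} above), which is why every later health_status event re-prints it. A sketch for reading a single label back:

    podman inspect ovn_metadata_agent --format '{{ index .Config.Labels "config_id" }}'
    # -> ovn_metadata_agent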
Dec  9 05:31:11 np0005551604 python3.9[106015]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  9 05:31:11 np0005551604 python3.9[106169]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:31:12 np0005551604 python3.9[106245]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  9 05:31:13 np0005551604 python3.9[106396]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765276272.496397-446-118819710670613/source dest=/etc/systemd/system/edpm_ovn_metadata_agent.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:31:13 np0005551604 python3.9[106472]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec  9 05:31:13 np0005551604 systemd[1]: Reloading.
Dec  9 05:31:13 np0005551604 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  9 05:31:13 np0005551604 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
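The service file written at 05:31:13 is redacted in the log (content=NOT_LOGGING_PARAMETER); once the daemon-reload completes it can be read back with:

    systemctl cat edpm_ovn_metadata_agent.service
    systemctl status edpm_ovn_metadata_agent.service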
Dec  9 05:31:14 np0005551604 python3.9[106583]: ansible-systemd Invoked with state=restarted name=edpm_ovn_metadata_agent.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  9 05:31:14 np0005551604 systemd[1]: Reloading.
Dec  9 05:31:14 np0005551604 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  9 05:31:14 np0005551604 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  9 05:31:14 np0005551604 systemd[1]: Starting ovn_metadata_agent container...
Dec  9 05:31:15 np0005551604 systemd[1]: Started libcrun container.
Dec  9 05:31:15 np0005551604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e5d2449839733707e5a9b3894384d1e187573cc3f5bda89bccbba26ed260b5da/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff)
Dec  9 05:31:15 np0005551604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e5d2449839733707e5a9b3894384d1e187573cc3f5bda89bccbba26ed260b5da/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec  9 05:31:15 np0005551604 systemd[1]: Started /usr/bin/podman healthcheck run 8f562587c42532f877bd4ac5090cf2d81dd9415b6201e22f74972e6d6b9e9403.
Dec  9 05:31:15 np0005551604 podman[106624]: 2025-12-09 10:31:15.059281268 +0000 UTC m=+0.130248355 container init 8f562587c42532f877bd4ac5090cf2d81dd9415b6201e22f74972e6d6b9e9403 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Dec  9 05:31:15 np0005551604 ovn_metadata_agent[106639]: + sudo -E kolla_set_configs
Dec  9 05:31:15 np0005551604 podman[106624]: 2025-12-09 10:31:15.090710243 +0000 UTC m=+0.161677280 container start 8f562587c42532f877bd4ac5090cf2d81dd9415b6201e22f74972e6d6b9e9403 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec  9 05:31:15 np0005551604 edpm-start-podman-container[106624]: ovn_metadata_agent
Dec  9 05:31:15 np0005551604 edpm-start-podman-container[106623]: Creating additional drop-in dependency for "ovn_metadata_agent" (8f562587c42532f877bd4ac5090cf2d81dd9415b6201e22f74972e6d6b9e9403)
Dec  9 05:31:15 np0005551604 podman[106646]: 2025-12-09 10:31:15.155417388 +0000 UTC m=+0.050324011 container health_status 8f562587c42532f877bd4ac5090cf2d81dd9415b6201e22f74972e6d6b9e9403 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
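The health_status=healthy events come from the /openstack/healthcheck script mounted read-only into the container (the healthcheck.test in config_data). It can be exercised on demand:

    podman healthcheck run ovn_metadata_agent; echo $?   # 0 means healthy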
Dec  9 05:31:15 np0005551604 systemd[1]: Reloading.
Dec  9 05:31:15 np0005551604 ovn_metadata_agent[106639]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec  9 05:31:15 np0005551604 ovn_metadata_agent[106639]: INFO:__main__:Validating config file
Dec  9 05:31:15 np0005551604 ovn_metadata_agent[106639]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec  9 05:31:15 np0005551604 ovn_metadata_agent[106639]: INFO:__main__:Copying service configuration files
Dec  9 05:31:15 np0005551604 ovn_metadata_agent[106639]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf
Dec  9 05:31:15 np0005551604 ovn_metadata_agent[106639]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf
Dec  9 05:31:15 np0005551604 ovn_metadata_agent[106639]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf
Dec  9 05:31:15 np0005551604 ovn_metadata_agent[106639]: INFO:__main__:Writing out command to execute
Dec  9 05:31:15 np0005551604 ovn_metadata_agent[106639]: INFO:__main__:Setting permission for /var/lib/neutron
Dec  9 05:31:15 np0005551604 ovn_metadata_agent[106639]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts
Dec  9 05:31:15 np0005551604 ovn_metadata_agent[106639]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy
Dec  9 05:31:15 np0005551604 ovn_metadata_agent[106639]: INFO:__main__:Setting permission for /var/lib/neutron/external
Dec  9 05:31:15 np0005551604 ovn_metadata_agent[106639]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper
Dec  9 05:31:15 np0005551604 ovn_metadata_agent[106639]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill
Dec  9 05:31:15 np0005551604 ovn_metadata_agent[106639]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids
Dec  9 05:31:15 np0005551604 ovn_metadata_agent[106639]: ++ cat /run_command
Dec  9 05:31:15 np0005551604 ovn_metadata_agent[106639]: + CMD=neutron-ovn-metadata-agent
Dec  9 05:31:15 np0005551604 ovn_metadata_agent[106639]: + ARGS=
Dec  9 05:31:15 np0005551604 ovn_metadata_agent[106639]: + sudo kolla_copy_cacerts
Dec  9 05:31:15 np0005551604 ovn_metadata_agent[106639]: + [[ ! -n '' ]]
Dec  9 05:31:15 np0005551604 ovn_metadata_agent[106639]: + . kolla_extend_start
Dec  9 05:31:15 np0005551604 ovn_metadata_agent[106639]: Running command: 'neutron-ovn-metadata-agent'
Dec  9 05:31:15 np0005551604 ovn_metadata_agent[106639]: + echo 'Running command: '\''neutron-ovn-metadata-agent'\'''
Dec  9 05:31:15 np0005551604 ovn_metadata_agent[106639]: + umask 0022
Dec  9 05:31:15 np0005551604 ovn_metadata_agent[106639]: + exec neutron-ovn-metadata-agent
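kolla_set_configs drives the copy and permission steps above from the mounted /var/lib/kolla/config_files/config.json (strategy COPY_ALWAYS). The actual file is redacted earlier in the log; the shape below is illustrative only, inferred from the INFO lines, and the owner/perm values are assumptions:

    cat /var/lib/kolla/config_files/config.json
    # {
    #   "command": "neutron-ovn-metadata-agent",
    #   "config_files": [
    #     {"source": "/etc/neutron.conf.d/01-rootwrap.conf",
    #      "dest": "/etc/neutron/rootwrap.conf",
    #      "owner": "neutron", "perm": "0600"}        <- owner/perm assumed, not logged
    #   ],
    #   "permissions": [
    #     {"path": "/var/lib/neutron", "owner": "neutron:neutron", "recurse": true}
    #   ]
    # }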
Dec  9 05:31:15 np0005551604 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  9 05:31:15 np0005551604 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  9 05:31:15 np0005551604 systemd[1]: Started ovn_metadata_agent container.
Dec  9 05:31:16 np0005551604 systemd[1]: session-22.scope: Deactivated successfully.
Dec  9 05:31:16 np0005551604 systemd[1]: session-22.scope: Consumed 36.257s CPU time.
Dec  9 05:31:16 np0005551604 systemd-logind[806]: Session 22 logged out. Waiting for processes to exit.
Dec  9 05:31:16 np0005551604 systemd-logind[806]: Removed session 22.
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.911 106644 INFO neutron.common.config [-] Logging enabled!#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.912 106644 INFO neutron.common.config [-] /usr/bin/neutron-ovn-metadata-agent version 22.2.2.dev43#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.912 106644 DEBUG neutron.common.config [-] command line: /usr/bin/neutron-ovn-metadata-agent setup_logging /usr/lib/python3.9/site-packages/neutron/common/config.py:123#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.912 106644 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.912 106644 DEBUG neutron.agent.ovn.metadata_agent [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.912 106644 DEBUG neutron.agent.ovn.metadata_agent [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.912 106644 DEBUG neutron.agent.ovn.metadata_agent [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.912 106644 DEBUG neutron.agent.ovn.metadata_agent [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.913 106644 DEBUG neutron.agent.ovn.metadata_agent [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.913 106644 DEBUG neutron.agent.ovn.metadata_agent [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.913 106644 DEBUG neutron.agent.ovn.metadata_agent [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.913 106644 DEBUG neutron.agent.ovn.metadata_agent [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.913 106644 DEBUG neutron.agent.ovn.metadata_agent [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.913 106644 DEBUG neutron.agent.ovn.metadata_agent [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.913 106644 DEBUG neutron.agent.ovn.metadata_agent [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.913 106644 DEBUG neutron.agent.ovn.metadata_agent [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.913 106644 DEBUG neutron.agent.ovn.metadata_agent [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.914 106644 DEBUG neutron.agent.ovn.metadata_agent [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.914 106644 DEBUG neutron.agent.ovn.metadata_agent [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.914 106644 DEBUG neutron.agent.ovn.metadata_agent [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.914 106644 DEBUG neutron.agent.ovn.metadata_agent [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.914 106644 DEBUG neutron.agent.ovn.metadata_agent [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.914 106644 DEBUG neutron.agent.ovn.metadata_agent [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.914 106644 DEBUG neutron.agent.ovn.metadata_agent [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.914 106644 DEBUG neutron.agent.ovn.metadata_agent [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.914 106644 DEBUG neutron.agent.ovn.metadata_agent [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.914 106644 DEBUG neutron.agent.ovn.metadata_agent [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.914 106644 DEBUG neutron.agent.ovn.metadata_agent [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.915 106644 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.915 106644 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.915 106644 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.915 106644 DEBUG neutron.agent.ovn.metadata_agent [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.915 106644 DEBUG neutron.agent.ovn.metadata_agent [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.915 106644 DEBUG neutron.agent.ovn.metadata_agent [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.915 106644 DEBUG neutron.agent.ovn.metadata_agent [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.915 106644 DEBUG neutron.agent.ovn.metadata_agent [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.915 106644 DEBUG neutron.agent.ovn.metadata_agent [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.916 106644 DEBUG neutron.agent.ovn.metadata_agent [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.916 106644 DEBUG neutron.agent.ovn.metadata_agent [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.916 106644 DEBUG neutron.agent.ovn.metadata_agent [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.916 106644 DEBUG neutron.agent.ovn.metadata_agent [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.916 106644 DEBUG neutron.agent.ovn.metadata_agent [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.916 106644 DEBUG neutron.agent.ovn.metadata_agent [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.916 106644 DEBUG neutron.agent.ovn.metadata_agent [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.916 106644 DEBUG neutron.agent.ovn.metadata_agent [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.916 106644 DEBUG neutron.agent.ovn.metadata_agent [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.917 106644 DEBUG neutron.agent.ovn.metadata_agent [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.917 106644 DEBUG neutron.agent.ovn.metadata_agent [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.917 106644 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.917 106644 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.917 106644 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.917 106644 DEBUG neutron.agent.ovn.metadata_agent [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.917 106644 DEBUG neutron.agent.ovn.metadata_agent [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.917 106644 DEBUG neutron.agent.ovn.metadata_agent [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.917 106644 DEBUG neutron.agent.ovn.metadata_agent [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.917 106644 DEBUG neutron.agent.ovn.metadata_agent [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.917 106644 DEBUG neutron.agent.ovn.metadata_agent [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.918 106644 DEBUG neutron.agent.ovn.metadata_agent [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.918 106644 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.918 106644 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.918 106644 DEBUG neutron.agent.ovn.metadata_agent [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.918 106644 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.918 106644 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.918 106644 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.918 106644 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.918 106644 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.919 106644 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.919 106644 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.919 106644 DEBUG neutron.agent.ovn.metadata_agent [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.919 106644 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.919 106644 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.919 106644 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.919 106644 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.919 106644 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.919 106644 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.919 106644 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.920 106644 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.920 106644 DEBUG neutron.agent.ovn.metadata_agent [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.920 106644 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.920 106644 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.920 106644 DEBUG neutron.agent.ovn.metadata_agent [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.920 106644 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.920 106644 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.920 106644 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.920 106644 DEBUG neutron.agent.ovn.metadata_agent [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.920 106644 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.921 106644 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.921 106644 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.921 106644 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.921 106644 DEBUG neutron.agent.ovn.metadata_agent [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.921 106644 DEBUG neutron.agent.ovn.metadata_agent [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.921 106644 DEBUG neutron.agent.ovn.metadata_agent [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.921 106644 DEBUG neutron.agent.ovn.metadata_agent [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.921 106644 DEBUG neutron.agent.ovn.metadata_agent [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.921 106644 DEBUG neutron.agent.ovn.metadata_agent [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.921 106644 DEBUG neutron.agent.ovn.metadata_agent [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.921 106644 DEBUG neutron.agent.ovn.metadata_agent [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.922 106644 DEBUG neutron.agent.ovn.metadata_agent [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.922 106644 DEBUG neutron.agent.ovn.metadata_agent [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.922 106644 DEBUG neutron.agent.ovn.metadata_agent [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.922 106644 DEBUG neutron.agent.ovn.metadata_agent [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.922 106644 DEBUG neutron.agent.ovn.metadata_agent [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.922 106644 DEBUG neutron.agent.ovn.metadata_agent [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.922 106644 DEBUG neutron.agent.ovn.metadata_agent [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.922 106644 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.922 106644 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.922 106644 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.923 106644 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.923 106644 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.923 106644 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.923 106644 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.923 106644 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.923 106644 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.923 106644 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.923 106644 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.923 106644 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.923 106644 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.924 106644 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.924 106644 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.924 106644 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.924 106644 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.924 106644 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.924 106644 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.924 106644 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.924 106644 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.924 106644 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.925 106644 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.925 106644 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.925 106644 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.925 106644 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.925 106644 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.925 106644 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.925 106644 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.925 106644 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.925 106644 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.926 106644 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.926 106644 DEBUG neutron.agent.ovn.metadata_agent [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.926 106644 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.926 106644 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.926 106644 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.926 106644 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.926 106644 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.926 106644 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.926 106644 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.926 106644 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.927 106644 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.927 106644 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.927 106644 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.927 106644 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.927 106644 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.927 106644 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.927 106644 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.927 106644 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.927 106644 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.928 106644 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.928 106644 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.928 106644 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.928 106644 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.928 106644 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.928 106644 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.928 106644 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.928 106644 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.928 106644 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.929 106644 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.929 106644 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.929 106644 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.929 106644 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.929 106644 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.929 106644 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.929 106644 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.929 106644 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.929 106644 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.929 106644 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.930 106644 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.930 106644 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.930 106644 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.930 106644 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.930 106644 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.930 106644 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.930 106644 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.930 106644 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.930 106644 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.930 106644 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.931 106644 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.931 106644 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.931 106644 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.931 106644 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.931 106644 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.931 106644 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.931 106644 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.931 106644 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.931 106644 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.931 106644 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.932 106644 DEBUG neutron.agent.ovn.metadata_agent [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.932 106644 DEBUG neutron.agent.ovn.metadata_agent [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.932 106644 DEBUG neutron.agent.ovn.metadata_agent [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.932 106644 DEBUG neutron.agent.ovn.metadata_agent [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.932 106644 DEBUG neutron.agent.ovn.metadata_agent [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.932 106644 DEBUG neutron.agent.ovn.metadata_agent [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.932 106644 DEBUG neutron.agent.ovn.metadata_agent [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.933 106644 DEBUG neutron.agent.ovn.metadata_agent [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.933 106644 DEBUG neutron.agent.ovn.metadata_agent [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.933 106644 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.933 106644 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.933 106644 DEBUG neutron.agent.ovn.metadata_agent [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.933 106644 DEBUG neutron.agent.ovn.metadata_agent [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.933 106644 DEBUG neutron.agent.ovn.metadata_agent [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.933 106644 DEBUG neutron.agent.ovn.metadata_agent [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.933 106644 DEBUG neutron.agent.ovn.metadata_agent [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.934 106644 DEBUG neutron.agent.ovn.metadata_agent [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.934 106644 DEBUG neutron.agent.ovn.metadata_agent [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.934 106644 DEBUG neutron.agent.ovn.metadata_agent [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.934 106644 DEBUG neutron.agent.ovn.metadata_agent [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.934 106644 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.934 106644 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.934 106644 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.934 106644 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.934 106644 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.934 106644 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.935 106644 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.935 106644 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.935 106644 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.935 106644 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.935 106644 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.935 106644 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.935 106644 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.935 106644 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.935 106644 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.935 106644 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.936 106644 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.936 106644 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.936 106644 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.936 106644 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.936 106644 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.936 106644 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.937 106644 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.937 106644 DEBUG neutron.agent.ovn.metadata_agent [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.937 106644 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.937 106644 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.937 106644 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.937 106644 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.937 106644 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.937 106644 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.937 106644 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.938 106644 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.938 106644 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.938 106644 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.938 106644 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.938 106644 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.938 106644 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.938 106644 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.938 106644 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.938 106644 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.939 106644 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.939 106644 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.939 106644 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.939 106644 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.939 106644 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.939 106644 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.939 106644 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.940 106644 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.940 106644 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.940 106644 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.940 106644 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.940 106644 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.940 106644 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.940 106644 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.940 106644 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.941 106644 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.941 106644 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.941 106644 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.941 106644 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.941 106644 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.941 106644 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.941 106644 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.941 106644 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.941 106644 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.942 106644 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.942 106644 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.942 106644 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.942 106644 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.942 106644 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.942 106644 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.942 106644 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.942 106644 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.942 106644 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.943 106644 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.943 106644 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.943 106644 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.943 106644 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.943 106644 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.943 106644 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.943 106644 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.943 106644 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.944 106644 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.944 106644 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.944 106644 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.944 106644 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.944 106644 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.944 106644 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.944 106644 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.944 106644 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.945 106644 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.945 106644 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
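The block ending at the row of asterisks above is oslo.config's standard startup dump: log_opt_values() walks every registered option and emits one DEBUG line per value, masking any option registered with secret=True, which is why transport_url prints as ****. A minimal sketch of how a service produces such a dump, reusing two option names that appear in the output above:

    # Minimal sketch of the oslo.config dump seen above; the two options
    # mirror values from this log, the rest of the agent's option set is
    # omitted.
    import logging

    from oslo_config import cfg

    logging.basicConfig(level=logging.DEBUG)
    LOG = logging.getLogger(__name__)

    CONF = cfg.CONF
    CONF.register_opts([
        cfg.IntOpt('ovsdb_timeout', default=10),
        cfg.StrOpt('transport_url', secret=True),  # secret -> logged as ****
    ])

    CONF([])                                 # parse an empty command line
    CONF.log_opt_values(LOG, logging.DEBUG)  # one DEBUG line per option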
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.954 106644 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.954 106644 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.954 106644 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.955 106644 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connecting...#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.955 106644 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connected#033[00m
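The three "Created schema index" lines are ovsdbapp preparing its python-ovs IDL for the local Open_vSwitch database before dialing tcp:127.0.0.1:6640, the ovs.ovsdb_connection endpoint from the dump above. A sketch of the same connection through ovsdbapp's public API, assuming only that ovsdbapp and a local ovsdb-server are available:

    # Sketch: connect to the local Open_vSwitch database the way the agent
    # does; endpoint and timeout come from the config dump above.
    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server('tcp:127.0.0.1:6640',
                                          'Open_vSwitch')
    conn = connection.Connection(idl=idl, timeout=10)  # OVS.ovsdb_timeout
    ovs = impl_idl.OvsdbIdl(conn)                      # command-style API

    print(ovs.list_br().execute(check_error=True))     # e.g. ['br-int']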
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.969 106644 DEBUG neutron.agent.ovn.metadata.agent [-] Loaded chassis name 9ec27861-bbe8-48fb-b30f-25b967e1609e (UUID: 9ec27861-bbe8-48fb-b30f-25b967e1609e) and ovn bridge br-int. _load_config /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:309#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.996 106644 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.996 106644 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.996 106644 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Dec  9 05:31:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.997 106644 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Chassis_Private.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Dec  9 05:31:17 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:16.999 106644 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...#033[00m
Dec  9 05:31:17 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:17.005 106644 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected#033[00m
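After loading its chassis config, the agent opens a second IDL, this time against the OVN Southbound database over SSL at ovsdbserver-sb.openstack.svc:6642. With python-ovs, client TLS material is registered on the Stream class before connecting; the certificate paths below are illustrative only, borrowed from the ovn_controller container mounts later in this log rather than from the metadata agent's own [ovn] SSL options:

    # Sketch: SSL connection to the OVN Southbound DB. The ssl_* paths are
    # placeholders; the agent takes its own from its [ovn] options.
    from ovs.stream import Stream
    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.ovn_southbound import impl_idl

    Stream.ssl_set_private_key_file('/etc/pki/tls/private/ovndb.key')
    Stream.ssl_set_certificate_file('/etc/pki/tls/certs/ovndb.crt')
    Stream.ssl_set_ca_cert_file('/etc/pki/tls/certs/ovndbca.crt')

    idl = connection.OvsdbIdl.from_server(
        'ssl:ovsdbserver-sb.openstack.svc:6642', 'OVN_Southbound')
    sb = impl_idl.OvnSbApiIdlImpl(connection.Connection(idl=idl,
                                                        timeout=180))
    print(sb.chassis_list().execute(check_error=True))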
Dec  9 05:31:17 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:17.010 106644 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched CREATE: ChassisPrivateCreateEvent(events=('create',), table='Chassis_Private', conditions=(('name', '=', '9ec27861-bbe8-48fb-b30f-25b967e1609e'),), old_conditions=None), priority=20 to row=Chassis_Private(chassis=[<ovs.db.idl.Row object at 0x7fa01184a610>], external_ids={}, name=9ec27861-bbe8-48fb-b30f-25b967e1609e, nb_cfg_timestamp=1765276225763, nb_cfg=1) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
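The "Matched CREATE" line is ovsdbapp's event machinery firing a registered row event: the agent watches the Chassis_Private table for a row whose name equals its own chassis id, and that row has just appeared (nb_cfg=1). The shape of such a handler, sketched against ovsdbapp's RowEvent base class; the run() body is illustrative, in neutron it re-registers the metadata agent:

    # Sketch of a row event like the ChassisPrivateCreateEvent matched
    # above: fire once when our own Chassis_Private row is created.
    from ovsdbapp.backend.ovs_idl import event as row_event

    CHASSIS = '9ec27861-bbe8-48fb-b30f-25b967e1609e'  # from the log line

    class ChassisPrivateCreateEvent(row_event.RowEvent):
        def __init__(self, agent):
            self.agent = agent
            super().__init__((self.ROW_CREATE,),          # events
                             'Chassis_Private',           # table
                             (('name', '=', CHASSIS),))   # conditions

        def run(self, event, row, old):
            # Invoked by the notify loop after matches() returns True.
            self.agent.register_metadata_agent()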
Dec  9 05:31:17 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:17.011 106644 DEBUG neutron_lib.callbacks.manager [-] Subscribe: <bound method MetadataProxyHandler.post_fork_initialize of <neutron.agent.ovn.metadata.server.MetadataProxyHandler object at 0x7fa01184a0d0>> process after_init 55550000, False subscribe /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:52#033[00m
Dec  9 05:31:17 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:17.012 106644 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  9 05:31:17 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:17.012 106644 DEBUG oslo_concurrency.lockutils [-] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  9 05:31:17 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:17.012 106644 DEBUG oslo_concurrency.lockutils [-] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  9 05:31:17 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:17.013 106644 INFO oslo_service.service [-] Starting 1 workers#033[00m
Dec  9 05:31:17 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:17.016 106644 DEBUG oslo_service.service [-] Started child 106752 _start_child /usr/lib/python3.9/site-packages/oslo_service/service.py:575#033[00m
Dec  9 05:31:17 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:17.020 106644 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.namespace_cmd', '--privsep_sock_path', '/tmp/tmplxn4eun2/privsep.sock']#033[00m
Dec  9 05:31:17 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:17.022 106752 DEBUG neutron_lib.callbacks.manager [-] Publish callbacks ['neutron.agent.ovn.metadata.server.MetadataProxyHandler.post_fork_initialize-167717'] for process (None), after_init _notify_loop /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:184#033[00m
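The Subscribe/Publish pair brackets the fork: the parent registered MetadataProxyHandler.post_fork_initialize for the (process, after_init) callback, oslo_service started one worker (metadata_workers = 1 in the dump below), and the child, pid 106752, published the event, which is why the following lines show that pid opening its own Southbound connection. The pattern with neutron_lib, as a sketch:

    # Sketch of the neutron_lib callback pattern in the lines above:
    # subscribe before the fork, publish from the child after it.
    from neutron_lib.callbacks import events, registry, resources

    def post_fork_initialize(resource, event, trigger, payload=None):
        # In the agent this is where the worker builds its own SB IDL.
        print('child init:', resource, event)

    registry.subscribe(post_fork_initialize,
                       resources.PROCESS, events.AFTER_INIT)

    # ...later, in the forked worker process:
    registry.publish(resources.PROCESS, events.AFTER_INIT, trigger=None)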
Dec  9 05:31:17 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:17.051 106752 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry#033[00m
Dec  9 05:31:17 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:17.051 106752 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87#033[00m
Dec  9 05:31:17 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:17.051 106752 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Dec  9 05:31:17 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:17.055 106752 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...#033[00m
Dec  9 05:31:17 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:17.062 106752 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected#033[00m
Dec  9 05:31:17 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:17.068 106752 INFO eventlet.wsgi.server [-] (106752) wsgi starting up on http:/var/lib/neutron/metadata_proxy#033[00m
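The worker's WSGI endpoint is a Unix socket, not a TCP port: /var/lib/neutron/metadata_proxy matches metadata_proxy_socket in the dump below, and the single slash after "http:" is simply how eventlet prints a socket path, not a truncated URL. Serving on a Unix socket with eventlet looks roughly like this (the handler body is a placeholder):

    # Sketch: an eventlet WSGI server on a Unix socket, as the metadata
    # proxy worker does.
    import socket

    import eventlet
    from eventlet import wsgi

    def app(environ, start_response):
        start_response('200 OK', [('Content-Type', 'text/plain')])
        return [b'metadata proxy\n']

    sock = eventlet.listen('/var/lib/neutron/metadata_proxy',
                           family=socket.AF_UNIX)
    wsgi.server(sock, app)  # logs "wsgi starting up on http:/var/lib/..."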
Dec  9 05:31:17 np0005551604 kernel: capability: warning: `privsep-helper' uses deprecated v2 capabilities in a way that may be insecure
Dec  9 05:31:17 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:17.752 106644 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap#033[00m
Dec  9 05:31:17 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:17.753 106644 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmplxn4eun2/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362#033[00m
Dec  9 05:31:17 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:17.577 106757 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Dec  9 05:31:17 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:17.583 106757 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Dec  9 05:31:17 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:17.585 106757 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none#033[00m
Dec  9 05:31:17 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:17.586 106757 INFO oslo.privsep.daemon [-] privsep daemon running as pid 106757#033[00m
Dec  9 05:31:17 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:17.756 106757 DEBUG oslo.privsep.daemon [-] privsep: reply[9b57e096-cee7-4896-93fe-8cfb0a377c52]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
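These privsep lines show the standard oslo.privsep bootstrap: the parent execs sudo + neutron-rootwrap + privsep-helper, the helper (pid 106757) comes up as uid/gid 0/0, then retains only CAP_SYS_ADMIN in its effective/permitted sets, matching privsep_namespace.capabilities = [21] in the dump below, and replies travel back over the privsep.sock Unix socket. A minimal context of that shape, as a sketch:

    # Sketch of an oslo.privsep context like neutron.privileged's
    # namespace_cmd: the decorated function executes in the helper daemon
    # as root with only CAP_SYS_ADMIN retained.
    from oslo_privsep import capabilities, priv_context

    namespace_cmd = priv_context.PrivContext(
        __name__,
        cfg_section='privsep_namespace',
        pypath=__name__ + '.namespace_cmd',
        capabilities=[capabilities.CAP_SYS_ADMIN],   # capability number 21
    )

    @namespace_cmd.entrypoint
    def read_protected(path):
        # Runs inside the privsep daemon, not in the agent process.
        with open(path) as f:
            return f.read()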
Dec  9 05:31:18 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:18.238 106757 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  9 05:31:18 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:18.238 106757 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  9 05:31:18 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:18.238 106757 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  9 05:31:18 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:18.773 106757 DEBUG oslo.privsep.daemon [-] privsep: reply[ec9c2ce1-450d-44a9-a7ba-95e64bd97ad5]: (4, []) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  9 05:31:18 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:18.776 106644 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbAddCommand(_result=None, table=Chassis_Private, record=9ec27861-bbe8-48fb-b30f-25b967e1609e, column=external_ids, values=({'neutron:ovn-metadata-id': 'bdbab969-a13d-5bc4-9a5f-1e6f9a29c628'},)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  9 05:31:18 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:18.929 106644 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=9ec27861-bbe8-48fb-b30f-25b967e1609e, col_values=(('external_ids', {'neutron:ovn-bridge': 'br-int'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
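The two "Running txn" lines write the agent's registration into its own Chassis_Private row: DbAddCommand merges a neutron:ovn-metadata-id key into external_ids, and DbSetCommand records neutron:ovn-bridge. With a Southbound API handle like the one sketched earlier, the equivalent calls are roughly the following; passing if_exists assumes an ovsdbapp release that supports it, which the DbSetCommand repr above indicates this one does:

    # Sketch: the two Chassis_Private updates above via ovsdbapp's generic
    # db_add/db_set commands; sb is an OVN Southbound API handle.
    chassis = '9ec27861-bbe8-48fb-b30f-25b967e1609e'

    sb.db_add('Chassis_Private', chassis, 'external_ids',
              {'neutron:ovn-metadata-id':
               'bdbab969-a13d-5bc4-9a5f-1e6f9a29c628'}
              ).execute(check_error=True)

    sb.db_set('Chassis_Private', chassis,
              ('external_ids', {'neutron:ovn-bridge': 'br-int'}),
              if_exists=True).execute(check_error=True)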
Dec  9 05:31:18 np0005551604 podman[106762]: 2025-12-09 10:31:18.962386016 +0000 UTC m=+0.110209526 container health_status e0a077177b2f078df1f170a6e5c0e8e08d4365b999ec0c487047ed6ab628f3d6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller)
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.191 106644 DEBUG oslo_service.service [-] Full set of CONF: wait /usr/lib/python3.9/site-packages/oslo_service/service.py:649#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.191 106644 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.191 106644 DEBUG oslo_service.service [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.191 106644 DEBUG oslo_service.service [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.191 106644 DEBUG oslo_service.service [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.192 106644 DEBUG oslo_service.service [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.192 106644 DEBUG oslo_service.service [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.192 106644 DEBUG oslo_service.service [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.192 106644 DEBUG oslo_service.service [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.192 106644 DEBUG oslo_service.service [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.193 106644 DEBUG oslo_service.service [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.193 106644 DEBUG oslo_service.service [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.193 106644 DEBUG oslo_service.service [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.193 106644 DEBUG oslo_service.service [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.194 106644 DEBUG oslo_service.service [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.194 106644 DEBUG oslo_service.service [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.194 106644 DEBUG oslo_service.service [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.194 106644 DEBUG oslo_service.service [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.195 106644 DEBUG oslo_service.service [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.195 106644 DEBUG oslo_service.service [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.195 106644 DEBUG oslo_service.service [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.195 106644 DEBUG oslo_service.service [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.195 106644 DEBUG oslo_service.service [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.195 106644 DEBUG oslo_service.service [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.196 106644 DEBUG oslo_service.service [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.196 106644 DEBUG oslo_service.service [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.196 106644 DEBUG oslo_service.service [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.196 106644 DEBUG oslo_service.service [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.197 106644 DEBUG oslo_service.service [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.197 106644 DEBUG oslo_service.service [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.197 106644 DEBUG oslo_service.service [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.197 106644 DEBUG oslo_service.service [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.197 106644 DEBUG oslo_service.service [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.198 106644 DEBUG oslo_service.service [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.198 106644 DEBUG oslo_service.service [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.198 106644 DEBUG oslo_service.service [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.198 106644 DEBUG oslo_service.service [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.199 106644 DEBUG oslo_service.service [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.199 106644 DEBUG oslo_service.service [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.199 106644 DEBUG oslo_service.service [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.199 106644 DEBUG oslo_service.service [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.199 106644 DEBUG oslo_service.service [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.199 106644 DEBUG oslo_service.service [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.200 106644 DEBUG oslo_service.service [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.200 106644 DEBUG oslo_service.service [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.200 106644 DEBUG oslo_service.service [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.200 106644 DEBUG oslo_service.service [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.200 106644 DEBUG oslo_service.service [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.200 106644 DEBUG oslo_service.service [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.201 106644 DEBUG oslo_service.service [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.201 106644 DEBUG oslo_service.service [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.201 106644 DEBUG oslo_service.service [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.201 106644 DEBUG oslo_service.service [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.201 106644 DEBUG oslo_service.service [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.201 106644 DEBUG oslo_service.service [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.202 106644 DEBUG oslo_service.service [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.202 106644 DEBUG oslo_service.service [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.202 106644 DEBUG oslo_service.service [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.202 106644 DEBUG oslo_service.service [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.202 106644 DEBUG oslo_service.service [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.203 106644 DEBUG oslo_service.service [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.203 106644 DEBUG oslo_service.service [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.203 106644 DEBUG oslo_service.service [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.203 106644 DEBUG oslo_service.service [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.203 106644 DEBUG oslo_service.service [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.203 106644 DEBUG oslo_service.service [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.204 106644 DEBUG oslo_service.service [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.204 106644 DEBUG oslo_service.service [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.204 106644 DEBUG oslo_service.service [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.204 106644 DEBUG oslo_service.service [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.204 106644 DEBUG oslo_service.service [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.204 106644 DEBUG oslo_service.service [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.204 106644 DEBUG oslo_service.service [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.205 106644 DEBUG oslo_service.service [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.205 106644 DEBUG oslo_service.service [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.205 106644 DEBUG oslo_service.service [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.205 106644 DEBUG oslo_service.service [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.205 106644 DEBUG oslo_service.service [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.205 106644 DEBUG oslo_service.service [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.206 106644 DEBUG oslo_service.service [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.206 106644 DEBUG oslo_service.service [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.206 106644 DEBUG oslo_service.service [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.206 106644 DEBUG oslo_service.service [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.206 106644 DEBUG oslo_service.service [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.206 106644 DEBUG oslo_service.service [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.207 106644 DEBUG oslo_service.service [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.207 106644 DEBUG oslo_service.service [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.207 106644 DEBUG oslo_service.service [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.207 106644 DEBUG oslo_service.service [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.207 106644 DEBUG oslo_service.service [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.207 106644 DEBUG oslo_service.service [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.208 106644 DEBUG oslo_service.service [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.208 106644 DEBUG oslo_service.service [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.208 106644 DEBUG oslo_service.service [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.208 106644 DEBUG oslo_service.service [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.208 106644 DEBUG oslo_service.service [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.208 106644 DEBUG oslo_service.service [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.208 106644 DEBUG oslo_service.service [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.209 106644 DEBUG oslo_service.service [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.209 106644 DEBUG oslo_service.service [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.209 106644 DEBUG oslo_service.service [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.209 106644 DEBUG oslo_service.service [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.209 106644 DEBUG oslo_service.service [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.210 106644 DEBUG oslo_service.service [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.210 106644 DEBUG oslo_service.service [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.210 106644 DEBUG oslo_service.service [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.210 106644 DEBUG oslo_service.service [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.210 106644 DEBUG oslo_service.service [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
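This second dump is the parent's oslo_service wait() logging the full CONF once the workers are up. Among the DEFAULT options are the ones the proxy uses to reach Nova: nova_metadata_host, nova_metadata_port and nova_metadata_protocol point at https://nova-metadata-internal.openstack.svc:8775, and metadata_proxy_shared_secret (masked as ****) signs each forwarded instance ID. Neutron's signature is an HMAC-SHA256 hex digest of the instance UUID keyed with that secret; as a sketch with placeholder values:

    # Sketch: the X-Instance-ID-Signature header the metadata proxy adds
    # when forwarding to Nova; secret and UUID are placeholders.
    import hashlib
    import hmac

    def sign_instance_id(shared_secret: str, instance_id: str) -> str:
        return hmac.new(shared_secret.encode(),
                        instance_id.encode(),
                        hashlib.sha256).hexdigest()

    sig = sign_instance_id('<metadata_proxy_shared_secret>',
                           '<instance-uuid>')
    # Sent with X-Instance-ID to
    # https://nova-metadata-internal.openstack.svc:8775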
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.211 106644 DEBUG oslo_service.service [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.211 106644 DEBUG oslo_service.service [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.211 106644 DEBUG oslo_service.service [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.211 106644 DEBUG oslo_service.service [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.211 106644 DEBUG oslo_service.service [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.211 106644 DEBUG oslo_service.service [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.212 106644 DEBUG oslo_service.service [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.212 106644 DEBUG oslo_service.service [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.212 106644 DEBUG oslo_service.service [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.212 106644 DEBUG oslo_service.service [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.212 106644 DEBUG oslo_service.service [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.212 106644 DEBUG oslo_service.service [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.213 106644 DEBUG oslo_service.service [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.213 106644 DEBUG oslo_service.service [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.213 106644 DEBUG oslo_service.service [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.213 106644 DEBUG oslo_service.service [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.213 106644 DEBUG oslo_service.service [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.213 106644 DEBUG oslo_service.service [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.214 106644 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.214 106644 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.214 106644 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.214 106644 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.214 106644 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.214 106644 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.215 106644 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.215 106644 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.215 106644 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.215 106644 DEBUG oslo_service.service [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.215 106644 DEBUG oslo_service.service [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.215 106644 DEBUG oslo_service.service [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.216 106644 DEBUG oslo_service.service [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.216 106644 DEBUG oslo_service.service [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.216 106644 DEBUG oslo_service.service [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.216 106644 DEBUG oslo_service.service [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.216 106644 DEBUG oslo_service.service [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
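The privsep sections list capabilities by their numeric values from linux/capability.h: [21, 12, 1, 2, 19] in the general privsep section is CAP_SYS_ADMIN, CAP_NET_ADMIN, CAP_DAC_OVERRIDE, CAP_DAC_READ_SEARCH and CAP_SYS_PTRACE, and the per-context sections that follow narrow the set (for example [21, 12] for ovs_vsctl, [21] for namespace operations, [12] for conntrack). Decoding the lists from this dump:

    # Decode the numeric privsep capability lists in this dump using the
    # constant values from linux/capability.h.
    CAP_NAMES = {
        1: 'CAP_DAC_OVERRIDE',
        2: 'CAP_DAC_READ_SEARCH',
        12: 'CAP_NET_ADMIN',
        19: 'CAP_SYS_PTRACE',
        21: 'CAP_SYS_ADMIN',
    }

    SECTIONS = {
        'privsep': [21, 12, 1, 2, 19],
        'privsep_dhcp_release': [21, 12],
        'privsep_ovs_vsctl': [21, 12],
        'privsep_namespace': [21],
        'privsep_conntrack': [12],
    }

    for section, caps in SECTIONS.items():
        print(section, '->', [CAP_NAMES[c] for c in caps])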
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.217 106644 DEBUG oslo_service.service [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.217 106644 DEBUG oslo_service.service [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.217 106644 DEBUG oslo_service.service [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.217 106644 DEBUG oslo_service.service [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.217 106644 DEBUG oslo_service.service [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.217 106644 DEBUG oslo_service.service [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.217 106644 DEBUG oslo_service.service [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.218 106644 DEBUG oslo_service.service [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.218 106644 DEBUG oslo_service.service [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.218 106644 DEBUG oslo_service.service [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.218 106644 DEBUG oslo_service.service [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.218 106644 DEBUG oslo_service.service [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.218 106644 DEBUG oslo_service.service [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.219 106644 DEBUG oslo_service.service [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.219 106644 DEBUG oslo_service.service [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.219 106644 DEBUG oslo_service.service [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.219 106644 DEBUG oslo_service.service [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.219 106644 DEBUG oslo_service.service [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.219 106644 DEBUG oslo_service.service [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.219 106644 DEBUG oslo_service.service [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.219 106644 DEBUG oslo_service.service [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.220 106644 DEBUG oslo_service.service [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.220 106644 DEBUG oslo_service.service [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.220 106644 DEBUG oslo_service.service [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.220 106644 DEBUG oslo_service.service [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.220 106644 DEBUG oslo_service.service [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.220 106644 DEBUG oslo_service.service [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.221 106644 DEBUG oslo_service.service [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.221 106644 DEBUG oslo_service.service [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.221 106644 DEBUG oslo_service.service [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
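The capability lists in these privsep groups are Linux capability numbers from linux/capability.h: the default context requests [21, 12, 1, 2, 19], i.e. CAP_SYS_ADMIN, CAP_NET_ADMIN, CAP_DAC_OVERRIDE, CAP_DAC_READ_SEARCH and CAP_SYS_PTRACE, while the specialised contexts (privsep_dhcp_release, privsep_ovs_vsctl, privsep_namespace, privsep_conntrack, privsep_link) keep only the subset they need. A minimal decoding sketch, with the number-to-name table copied from linux/capability.h:

    # Decode the oslo.privsep capability lists logged above.
    CAP_NAMES = {
        1: "CAP_DAC_OVERRIDE",
        2: "CAP_DAC_READ_SEARCH",
        12: "CAP_NET_ADMIN",
        19: "CAP_SYS_PTRACE",
        21: "CAP_SYS_ADMIN",
    }

    def decode(caps):
        return [CAP_NAMES.get(c, f"CAP({c})") for c in caps]

    print(decode([21, 12, 1, 2, 19]))  # default privsep context
    print(decode([21]))                # privsep_namespace
    print(decode([12]))                # privsep_conntrack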
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.221 106644 DEBUG oslo_service.service [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.221 106644 DEBUG oslo_service.service [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.222 106644 DEBUG oslo_service.service [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.222 106644 DEBUG oslo_service.service [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.222 106644 DEBUG oslo_service.service [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.222 106644 DEBUG oslo_service.service [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.222 106644 DEBUG oslo_service.service [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.222 106644 DEBUG oslo_service.service [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.223 106644 DEBUG oslo_service.service [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.223 106644 DEBUG oslo_service.service [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.223 106644 DEBUG oslo_service.service [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.223 106644 DEBUG oslo_service.service [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.223 106644 DEBUG oslo_service.service [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.224 106644 DEBUG oslo_service.service [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.224 106644 DEBUG oslo_service.service [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.224 106644 DEBUG oslo_service.service [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.224 106644 DEBUG oslo_service.service [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.225 106644 DEBUG oslo_service.service [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.225 106644 DEBUG oslo_service.service [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.225 106644 DEBUG oslo_service.service [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.225 106644 DEBUG oslo_service.service [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.225 106644 DEBUG oslo_service.service [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.225 106644 DEBUG oslo_service.service [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.225 106644 DEBUG oslo_service.service [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.226 106644 DEBUG oslo_service.service [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.226 106644 DEBUG oslo_service.service [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.226 106644 DEBUG oslo_service.service [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.226 106644 DEBUG oslo_service.service [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.226 106644 DEBUG oslo_service.service [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
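The nova.* block is the standard keystoneauth1 session/adapter option set that Neutron registers for its calls back into Nova; everything here is None or a library default, so the effective auth comes from whatever the deployment's [nova] section supplies. As a hedged sketch of how such a group becomes a usable HTTP session with keystoneauth1's stock config loaders (the config-file path is hypothetical):

    # Sketch: turning a "[nova]"-style oslo.config group into a session.
    from keystoneauth1 import loading
    from oslo_config import cfg

    CONF = cfg.CONF
    loading.register_auth_conf_options(CONF, "nova")
    loading.register_session_conf_options(CONF, "nova")
    CONF(["--config-file", "/etc/neutron/neutron.conf"])  # hypothetical path

    auth = loading.load_auth_from_conf_options(CONF, "nova")
    sess = loading.load_session_from_conf_options(CONF, "nova", auth=auth)

The placement.* and ironic.* groups that follow are the same pattern pointed at different services.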
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.226 106644 DEBUG oslo_service.service [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.226 106644 DEBUG oslo_service.service [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.227 106644 DEBUG oslo_service.service [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.227 106644 DEBUG oslo_service.service [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.227 106644 DEBUG oslo_service.service [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.227 106644 DEBUG oslo_service.service [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.227 106644 DEBUG oslo_service.service [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.227 106644 DEBUG oslo_service.service [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.228 106644 DEBUG oslo_service.service [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.228 106644 DEBUG oslo_service.service [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.228 106644 DEBUG oslo_service.service [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.228 106644 DEBUG oslo_service.service [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.228 106644 DEBUG oslo_service.service [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.228 106644 DEBUG oslo_service.service [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.229 106644 DEBUG oslo_service.service [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.229 106644 DEBUG oslo_service.service [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.229 106644 DEBUG oslo_service.service [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.229 106644 DEBUG oslo_service.service [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.229 106644 DEBUG oslo_service.service [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.230 106644 DEBUG oslo_service.service [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.230 106644 DEBUG oslo_service.service [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.230 106644 DEBUG oslo_service.service [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.230 106644 DEBUG oslo_service.service [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.230 106644 DEBUG oslo_service.service [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.230 106644 DEBUG oslo_service.service [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.231 106644 DEBUG oslo_service.service [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.231 106644 DEBUG oslo_service.service [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.231 106644 DEBUG oslo_service.service [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.231 106644 DEBUG oslo_service.service [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.231 106644 DEBUG oslo_service.service [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.231 106644 DEBUG oslo_service.service [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.231 106644 DEBUG oslo_service.service [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.232 106644 DEBUG oslo_service.service [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.232 106644 DEBUG oslo_service.service [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.232 106644 DEBUG oslo_service.service [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.232 106644 DEBUG oslo_service.service [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.232 106644 DEBUG oslo_service.service [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.232 106644 DEBUG oslo_service.service [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.232 106644 DEBUG oslo_service.service [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.233 106644 DEBUG oslo_service.service [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.233 106644 DEBUG oslo_service.service [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.233 106644 DEBUG oslo_service.service [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.233 106644 DEBUG oslo_service.service [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.233 106644 DEBUG oslo_service.service [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.233 106644 DEBUG oslo_service.service [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.233 106644 DEBUG oslo_service.service [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.233 106644 DEBUG oslo_service.service [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.234 106644 DEBUG oslo_service.service [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.234 106644 DEBUG oslo_service.service [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.234 106644 DEBUG oslo_service.service [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.234 106644 DEBUG oslo_service.service [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.234 106644 DEBUG oslo_service.service [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.234 106644 DEBUG oslo_service.service [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.234 106644 DEBUG oslo_service.service [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.234 106644 DEBUG oslo_service.service [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.234 106644 DEBUG oslo_service.service [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.234 106644 DEBUG oslo_service.service [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.235 106644 DEBUG oslo_service.service [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.235 106644 DEBUG oslo_service.service [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.235 106644 DEBUG oslo_service.service [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.235 106644 DEBUG oslo_service.service [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
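The southbound connection is the interesting one in this block: ovn.ovn_sb_connection points at ssl:ovsdbserver-sb.openstack.svc:6642, secured with the key, certificate and CA paths logged just above, while the northbound setting stays at the plain tcp:127.0.0.1:6641 default. A standalone handshake check against that endpoint, assuming the same host and file paths, could look like:

    # Sketch: verify the OVN southbound TLS endpoint with the certs above.
    import socket
    import ssl

    HOST, PORT = "ovsdbserver-sb.openstack.svc", 6642
    ctx = ssl.create_default_context(cafile="/etc/pki/tls/certs/ovndbca.crt")
    ctx.load_cert_chain("/etc/pki/tls/certs/ovndb.crt",
                        "/etc/pki/tls/private/ovndb.key")
    with socket.create_connection((HOST, PORT), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
            print("TLS OK:", tls.version())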
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.235 106644 DEBUG oslo_service.service [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.235 106644 DEBUG oslo_service.service [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.235 106644 DEBUG oslo_service.service [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.235 106644 DEBUG oslo_service.service [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.235 106644 DEBUG oslo_service.service [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
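OVS.* and ovs.* above are not a typo: oslo.config group names are case-sensitive, so the uppercase [OVS] agent options and the lowercase [ovs] ovsdb-connection options live in separate namespaces. In miniature:

    # oslo.config groups are case-sensitive namespaces.
    from oslo_config import cfg

    CONF = cfg.CONF
    CONF.register_opts([cfg.IntOpt("ovsdb_timeout", default=10)], group="OVS")
    CONF.register_opts(
        [cfg.StrOpt("ovsdb_connection", default="tcp:127.0.0.1:6640")],
        group="ovs")
    CONF([])
    print(CONF.OVS.ovsdb_timeout, CONF.ovs.ovsdb_connection)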
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.235 106644 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.236 106644 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.236 106644 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.236 106644 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.236 106644 DEBUG oslo_service.service [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.236 106644 DEBUG oslo_service.service [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.236 106644 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.236 106644 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.236 106644 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.236 106644 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.237 106644 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.237 106644 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.237 106644 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.237 106644 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.237 106644 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.237 106644 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.237 106644 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.237 106644 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.237 106644 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.237 106644 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.238 106644 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.238 106644 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.238 106644 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.238 106644 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.238 106644 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.238 106644 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.238 106644 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.238 106644 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.238 106644 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.238 106644 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.239 106644 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.239 106644 DEBUG oslo_service.service [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.239 106644 DEBUG oslo_service.service [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.239 106644 DEBUG oslo_service.service [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.239 106644 DEBUG oslo_service.service [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:31:19 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:31:19.239 106644 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
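The row of asterisks closes the dump: the preceding wall of DEBUG lines is oslo.config's ConfigOpts.log_opt_values() walking every registered option at startup, one line per option (logged from cfg.py:2609) bracketed by banner lines (cfg.py:2613), with options marked secret, such as oslo_messaging_notifications.transport_url, masked as ****. The mechanism in miniature:

    # Minimal reproduction of the option-dump pattern seen above.
    import logging
    from oslo_config import cfg

    logging.basicConfig(level=logging.DEBUG)
    LOG = logging.getLogger(__name__)

    CONF = cfg.CONF
    CONF.register_opts([
        cfg.IntOpt("thread_pool_size", default=8),
        cfg.StrOpt("transport_url", secret=True,
                   default="rabbit://user:pw@host"),  # dumped as ****
    ], group="demo")
    CONF([])
    CONF.log_opt_values(LOG, logging.DEBUG)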
Dec  9 05:31:21 np0005551604 systemd-logind[806]: New session 23 of user zuul.
Dec  9 05:31:21 np0005551604 systemd[1]: Started Session 23 of User zuul.
Dec  9 05:31:22 np0005551604 python3.9[106942]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  9 05:31:24 np0005551604 python3.9[107098]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --filter name=^nova_virtlogd$ --format \{\{.Names\}\} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
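With the configuration dump finished, a new zuul session opens and the Ansible run resumes; its first probe checks whether a nova_virtlogd container exists. Unescaped, the _raw_params above are podman ps -a --filter name=^nova_virtlogd$ --format {{.Names}}, which prints the container name if present and nothing otherwise. The same probe in Python:

    # Sketch: the container-existence probe from the Ansible task above.
    import subprocess

    out = subprocess.run(
        ["podman", "ps", "-a",
         "--filter", "name=^nova_virtlogd$",
         "--format", "{{.Names}}"],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    print("exists" if out else "absent")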
Dec  9 05:31:25 np0005551604 python3.9[107263]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec  9 05:31:25 np0005551604 systemd[1]: Reloading.
Dec  9 05:31:25 np0005551604 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  9 05:31:25 np0005551604 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  9 05:31:26 np0005551604 python3.9[107447]: ansible-ansible.builtin.service_facts Invoked
Dec  9 05:31:27 np0005551604 network[107464]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec  9 05:31:27 np0005551604 network[107465]: 'network-scripts' will be removed from distribution in near future.
Dec  9 05:31:27 np0005551604 network[107466]: It is advised to switch to 'NetworkManager' instead for network management.
Dec  9 05:31:30 np0005551604 python3.9[107729]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_libvirt.target state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  9 05:31:31 np0005551604 python3.9[107882]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtlogd_wrapper.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  9 05:31:33 np0005551604 python3.9[108035]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtnodedevd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  9 05:31:34 np0005551604 python3.9[108188]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtproxyd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  9 05:31:35 np0005551604 python3.9[108341]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtqemud.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  9 05:31:36 np0005551604 python3.9[108494]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtsecretd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  9 05:31:36 np0005551604 python3.9[108647]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtstoraged.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  9 05:31:38 np0005551604 python3.9[108800]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:31:39 np0005551604 python3.9[108952]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:31:40 np0005551604 python3.9[109104]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:31:40 np0005551604 python3.9[109256]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:31:41 np0005551604 python3.9[109408]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:31:41 np0005551604 python3.9[109560]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:31:42 np0005551604 python3.9[109712]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:31:43 np0005551604 python3.9[109864]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:31:43 np0005551604 python3.9[110016]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:31:44 np0005551604 python3.9[110168]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:31:45 np0005551604 python3.9[110320]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:31:45 np0005551604 podman[110444]: 2025-12-09 10:31:45.765030632 +0000 UTC m=+0.082174050 container health_status 8f562587c42532f877bd4ac5090cf2d81dd9415b6201e22f74972e6d6b9e9403 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Dec  9 05:31:45 np0005551604 python3.9[110487]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:31:46 np0005551604 python3.9[110642]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:31:47 np0005551604 python3.9[110794]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:31:48 np0005551604 python3.9[110946]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012  systemctl disable --now certmonger.service#012  test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
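The #012 sequences in this shell task are syslog's octal escaping of embedded newlines. Decoded, the task disables certmonger only when it is currently active, and masks it unless a local unit file already overrides it:

    # Decode the syslog-escaped ("#012" = "\n") shell snippet logged above.
    raw = ("if systemctl is-active certmonger.service; then#012"
           "  systemctl disable --now certmonger.service#012"
           "  test -f /etc/systemd/system/certmonger.service"
           " || systemctl mask certmonger.service#012fi#012")
    print(raw.replace("#012", "\n"))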
Dec  9 05:31:49 np0005551604 python3.9[111098]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec  9 05:31:49 np0005551604 podman[111215]: 2025-12-09 10:31:49.930418537 +0000 UTC m=+0.085668489 container health_status e0a077177b2f078df1f170a6e5c0e8e08d4365b999ec0c487047ed6ab628f3d6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2)
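The podman records interleaved with the Ansible tasks are periodic container healthcheck events: ovn_metadata_agent and ovn_controller each execute the /openstack/healthcheck test mounted into the container and report health_status=healthy with health_failing_streak=0. The same check can be forced by hand; a sketch, assuming podman and the container name from the log:

    # Sketch: run a container's configured healthcheck on demand.
    import subprocess

    rc = subprocess.run(
        ["podman", "healthcheck", "run", "ovn_controller"]).returncode
    print("healthy" if rc == 0 else f"unhealthy (rc={rc})")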
Dec  9 05:31:50 np0005551604 python3.9[111275]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec  9 05:31:50 np0005551604 systemd[1]: Reloading.
Dec  9 05:31:50 np0005551604 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  9 05:31:50 np0005551604 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  9 05:31:51 np0005551604 python3.9[111464]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_libvirt.target _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  9 05:31:51 np0005551604 python3.9[111617]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtlogd_wrapper.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  9 05:31:52 np0005551604 python3.9[111770]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtnodedevd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  9 05:31:53 np0005551604 python3.9[111923]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtproxyd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  9 05:31:54 np0005551604 python3.9[112076]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtqemud.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  9 05:31:54 np0005551604 python3.9[112229]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtsecretd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  9 05:31:55 np0005551604 python3.9[112382]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtstoraged.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
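This run of tasks is a complete teardown of the TripleO-era libvirt units: each tripleo_nova_* unit is stopped and disabled, its unit file is removed from both /usr/lib/systemd/system and /etc/systemd/system, the manager is reloaded, and reset-failed clears any lingering failed state. Sketched as one sequence, driving systemctl directly with the same unit names:

    # Sketch of the teardown sequence performed by the Ansible tasks above.
    import subprocess
    from pathlib import Path

    UNITS = [
        "tripleo_nova_libvirt.target",
        "tripleo_nova_virtlogd_wrapper.service",
        "tripleo_nova_virtnodedevd.service",
        "tripleo_nova_virtproxyd.service",
        "tripleo_nova_virtqemud.service",
        "tripleo_nova_virtsecretd.service",
        "tripleo_nova_virtstoraged.service",
    ]

    def systemctl(*args):
        subprocess.run(["systemctl", *args], check=False)  # tolerate absent units

    for unit in UNITS:
        systemctl("disable", "--now", unit)
    for unit in UNITS:
        for d in ("/usr/lib/systemd/system", "/etc/systemd/system"):
            Path(d, unit).unlink(missing_ok=True)
    systemctl("daemon-reload")
    for unit in UNITS:
        systemctl("reset-failed", unit)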
Dec  9 05:31:56 np0005551604 python3.9[112535]: ansible-ansible.builtin.getent Invoked with database=passwd key=libvirt fail_key=True service=None split=None
Dec  9 05:31:57 np0005551604 python3.9[112688]: ansible-ansible.builtin.group Invoked with gid=42473 name=libvirt state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Dec  9 05:31:58 np0005551604 python3.9[112846]: ansible-ansible.builtin.user Invoked with comment=libvirt user group=libvirt groups=[''] name=libvirt shell=/sbin/nologin state=present uid=42473 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
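The libvirt account is pinned to uid/gid 42473 with a nologin shell before the package install that follows. A post-condition check using only the standard library:

    # Verify the libvirt account created by the tasks above.
    import grp
    import pwd

    g = grp.getgrnam("libvirt")
    u = pwd.getpwnam("libvirt")
    assert g.gr_gid == 42473 and u.pw_uid == 42473
    assert u.pw_shell == "/sbin/nologin"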
Dec  9 05:32:00 np0005551604 python3.9[113006]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  9 05:32:01 np0005551604 python3.9[113092]: ansible-ansible.legacy.dnf Invoked with name=['libvirt', 'libvirt-admin', 'libvirt-client', 'libvirt-daemon', 'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  9 05:32:15 np0005551604 podman[113103]: 2025-12-09 10:32:15.930631118 +0000 UTC m=+0.086637618 container health_status 8f562587c42532f877bd4ac5090cf2d81dd9415b6201e22f74972e6d6b9e9403 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  9 05:32:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:32:16.957 106644 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  9 05:32:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:32:16.958 106644 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  9 05:32:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:32:16.959 106644 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  9 05:32:21 np0005551604 podman[113224]: 2025-12-09 10:32:21.020190243 +0000 UTC m=+0.157215255 container health_status e0a077177b2f078df1f170a6e5c0e8e08d4365b999ec0c487047ed6ab628f3d6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  9 05:32:41 np0005551604 kernel: SELinux:  Converting 2757 SID table entries...
Dec  9 05:32:41 np0005551604 kernel: SELinux:  policy capability network_peer_controls=1
Dec  9 05:32:41 np0005551604 kernel: SELinux:  policy capability open_perms=1
Dec  9 05:32:41 np0005551604 kernel: SELinux:  policy capability extended_socket_class=1
Dec  9 05:32:41 np0005551604 kernel: SELinux:  policy capability always_check_network=0
Dec  9 05:32:41 np0005551604 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec  9 05:32:41 np0005551604 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec  9 05:32:41 np0005551604 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec  9 05:32:46 np0005551604 dbus-broker-launch[772]: avc:  op=load_policy lsm=selinux seqno=12 res=1
Dec  9 05:32:47 np0005551604 podman[113340]: 2025-12-09 10:32:47.379790329 +0000 UTC m=+0.503219467 container health_status 8f562587c42532f877bd4ac5090cf2d81dd9415b6201e22f74972e6d6b9e9403 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  9 05:32:52 np0005551604 kernel: SELinux:  Converting 2757 SID table entries...
Dec  9 05:32:52 np0005551604 kernel: SELinux:  policy capability network_peer_controls=1
Dec  9 05:32:52 np0005551604 kernel: SELinux:  policy capability open_perms=1
Dec  9 05:32:52 np0005551604 kernel: SELinux:  policy capability extended_socket_class=1
Dec  9 05:32:52 np0005551604 kernel: SELinux:  policy capability always_check_network=0
Dec  9 05:32:52 np0005551604 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec  9 05:32:52 np0005551604 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec  9 05:32:52 np0005551604 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec  9 05:32:52 np0005551604 podman[113364]: 2025-12-09 10:32:52.291685164 +0000 UTC m=+0.084647858 container health_status e0a077177b2f078df1f170a6e5c0e8e08d4365b999ec0c487047ed6ab628f3d6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Dec  9 05:33:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:33:16.958 106644 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  9 05:33:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:33:16.960 106644 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  9 05:33:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:33:16.960 106644 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  9 05:33:17 np0005551604 dbus-broker-launch[772]: avc:  op=load_policy lsm=selinux seqno=13 res=1
Dec  9 05:33:17 np0005551604 podman[121221]: 2025-12-09 10:33:17.921259224 +0000 UTC m=+0.062432152 container health_status 8f562587c42532f877bd4ac5090cf2d81dd9415b6201e22f74972e6d6b9e9403 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, managed_by=edpm_ansible, tcib_managed=true)
Dec  9 05:33:22 np0005551604 podman[124403]: 2025-12-09 10:33:22.959636303 +0000 UTC m=+0.122358261 container health_status e0a077177b2f078df1f170a6e5c0e8e08d4365b999ec0c487047ed6ab628f3d6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
Dec  9 05:33:49 np0005551604 podman[130249]: 2025-12-09 10:33:49.01845792 +0000 UTC m=+0.155113640 container health_status 8f562587c42532f877bd4ac5090cf2d81dd9415b6201e22f74972e6d6b9e9403 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202)
Dec  9 05:33:54 np0005551604 podman[130272]: 2025-12-09 10:33:54.05081029 +0000 UTC m=+0.180984235 container health_status e0a077177b2f078df1f170a6e5c0e8e08d4365b999ec0c487047ed6ab628f3d6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec  9 05:33:55 np0005551604 kernel: SELinux:  Converting 2758 SID table entries...
Dec  9 05:33:55 np0005551604 kernel: SELinux:  policy capability network_peer_controls=1
Dec  9 05:33:55 np0005551604 kernel: SELinux:  policy capability open_perms=1
Dec  9 05:33:55 np0005551604 kernel: SELinux:  policy capability extended_socket_class=1
Dec  9 05:33:55 np0005551604 kernel: SELinux:  policy capability always_check_network=0
Dec  9 05:33:55 np0005551604 kernel: SELinux:  policy capability cgroup_seclabel=1
Dec  9 05:33:55 np0005551604 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Dec  9 05:33:55 np0005551604 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Dec  9 05:33:57 np0005551604 dbus-broker-launch[768]: Noticed file-system modification, trigger reload.
Dec  9 05:33:57 np0005551604 dbus-broker-launch[772]: avc:  op=load_policy lsm=selinux seqno=14 res=1
Dec  9 05:33:57 np0005551604 dbus-broker-launch[768]: Noticed file-system modification, trigger reload.
Dec  9 05:34:06 np0005551604 systemd[1]: Stopping OpenSSH server daemon...
Dec  9 05:34:06 np0005551604 systemd[1]: sshd.service: Deactivated successfully.
Dec  9 05:34:06 np0005551604 systemd[1]: Stopped OpenSSH server daemon.
Dec  9 05:34:06 np0005551604 systemd[1]: sshd.service: Consumed 2.403s CPU time, read 32.0K from disk, written 16.0K to disk.
Dec  9 05:34:06 np0005551604 systemd[1]: Stopped target sshd-keygen.target.
Dec  9 05:34:06 np0005551604 systemd[1]: Stopping sshd-keygen.target...
Dec  9 05:34:06 np0005551604 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec  9 05:34:06 np0005551604 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec  9 05:34:06 np0005551604 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Dec  9 05:34:06 np0005551604 systemd[1]: Reached target sshd-keygen.target.
Dec  9 05:34:06 np0005551604 systemd[1]: Starting OpenSSH server daemon...
Dec  9 05:34:06 np0005551604 systemd[1]: Started OpenSSH server daemon.
Dec  9 05:34:09 np0005551604 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec  9 05:34:09 np0005551604 systemd[1]: Starting man-db-cache-update.service...
Dec  9 05:34:09 np0005551604 systemd[1]: Reloading.
Dec  9 05:34:09 np0005551604 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  9 05:34:09 np0005551604 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  9 05:34:09 np0005551604 systemd[1]: Queuing reload/restart jobs for marked units…
Dec  9 05:34:15 np0005551604 python3.9[136710]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec  9 05:34:15 np0005551604 systemd[1]: Reloading.
Dec  9 05:34:15 np0005551604 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  9 05:34:15 np0005551604 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  9 05:34:16 np0005551604 python3.9[138006]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec  9 05:34:16 np0005551604 systemd[1]: Reloading.
Dec  9 05:34:16 np0005551604 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  9 05:34:16 np0005551604 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  9 05:34:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:34:16.960 106644 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  9 05:34:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:34:16.961 106644 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  9 05:34:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:34:16.962 106644 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  9 05:34:17 np0005551604 python3.9[139370]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tls.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec  9 05:34:17 np0005551604 systemd[1]: Reloading.
Dec  9 05:34:17 np0005551604 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  9 05:34:17 np0005551604 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  9 05:34:17 np0005551604 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec  9 05:34:17 np0005551604 systemd[1]: Finished man-db-cache-update.service.
Dec  9 05:34:17 np0005551604 systemd[1]: man-db-cache-update.service: Consumed 10.999s CPU time.
Dec  9 05:34:17 np0005551604 systemd[1]: run-r0aaadbf831054fe4b97610d26b19ad4b.service: Deactivated successfully.
Dec  9 05:34:18 np0005551604 python3.9[140422]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=virtproxyd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec  9 05:34:18 np0005551604 systemd[1]: Reloading.
Dec  9 05:34:18 np0005551604 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  9 05:34:18 np0005551604 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  9 05:34:19 np0005551604 podman[140586]: 2025-12-09 10:34:19.153876554 +0000 UTC m=+0.086469171 container health_status 8f562587c42532f877bd4ac5090cf2d81dd9415b6201e22f74972e6d6b9e9403 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec  9 05:34:19 np0005551604 python3.9[140626]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  9 05:34:19 np0005551604 systemd[1]: Reloading.
Dec  9 05:34:19 np0005551604 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  9 05:34:19 np0005551604 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  9 05:34:20 np0005551604 python3.9[140822]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  9 05:34:20 np0005551604 systemd[1]: Reloading.
Dec  9 05:34:20 np0005551604 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  9 05:34:20 np0005551604 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  9 05:34:22 np0005551604 python3.9[141012]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  9 05:34:22 np0005551604 systemd[1]: Reloading.
Dec  9 05:34:22 np0005551604 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  9 05:34:22 np0005551604 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  9 05:34:23 np0005551604 python3.9[141202]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  9 05:34:25 np0005551604 podman[141358]: 2025-12-09 10:34:25.003574795 +0000 UTC m=+0.152777725 container health_status e0a077177b2f078df1f170a6e5c0e8e08d4365b999ec0c487047ed6ab628f3d6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec  9 05:34:25 np0005551604 python3.9[141357]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  9 05:34:26 np0005551604 systemd[1]: Reloading.
Dec  9 05:34:26 np0005551604 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  9 05:34:26 np0005551604 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  9 05:34:27 np0005551604 python3.9[141574]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-tls.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Dec  9 05:34:27 np0005551604 systemd[1]: Reloading.
Dec  9 05:34:28 np0005551604 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  9 05:34:28 np0005551604 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  9 05:34:28 np0005551604 systemd[1]: Listening on libvirt proxy daemon socket.
Dec  9 05:34:28 np0005551604 systemd[1]: Listening on libvirt proxy daemon TLS IP socket.
Dec  9 05:34:29 np0005551604 python3.9[141767]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  9 05:34:30 np0005551604 python3.9[141924]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  9 05:34:31 np0005551604 python3.9[142081]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  9 05:34:32 np0005551604 python3.9[142236]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  9 05:34:32 np0005551604 python3.9[142391]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  9 05:34:34 np0005551604 python3.9[142546]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  9 05:34:34 np0005551604 python3.9[142701]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  9 05:34:35 np0005551604 python3.9[142856]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  9 05:34:38 np0005551604 python3.9[143011]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  9 05:34:40 np0005551604 python3.9[143166]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  9 05:34:41 np0005551604 python3.9[143321]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  9 05:34:42 np0005551604 python3.9[143476]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  9 05:34:43 np0005551604 python3.9[143631]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  9 05:34:44 np0005551604 python3.9[143786]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Dec  9 05:34:45 np0005551604 python3.9[143941]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/etc/tmpfiles.d/ setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec  9 05:34:46 np0005551604 python3.9[144093]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec  9 05:34:46 np0005551604 python3.9[144245]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  9 05:34:47 np0005551604 python3.9[144397]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt/private setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  9 05:34:48 np0005551604 python3.9[144549]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/CA setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  9 05:34:49 np0005551604 podman[144701]: 2025-12-09 10:34:49.301735399 +0000 UTC m=+0.092593806 container health_status 8f562587c42532f877bd4ac5090cf2d81dd9415b6201e22f74972e6d6b9e9403 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec  9 05:34:49 np0005551604 python3.9[144702]: ansible-ansible.builtin.file Invoked with group=qemu owner=root path=/etc/pki/qemu setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec  9 05:34:50 np0005551604 python3.9[144872]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtlogd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  9 05:34:51 np0005551604 python3.9[144997]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtlogd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1765276489.5686135-554-28059122970467/.source.conf follow=False _original_basename=virtlogd.conf checksum=d7a72ae92c2c205983b029473e05a6aa4c58ec24 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:34:52 np0005551604 python3.9[145149]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtnodedevd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  9 05:34:52 np0005551604 python3.9[145274]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtnodedevd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1765276491.6373978-554-11644743782398/.source.conf follow=False _original_basename=virtnodedevd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:34:53 np0005551604 python3.9[145426]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtproxyd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  9 05:34:54 np0005551604 python3.9[145551]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtproxyd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1765276492.919412-554-275258562983214/.source.conf follow=False _original_basename=virtproxyd.conf checksum=28bc484b7c9988e03de49d4fcc0a088ea975f716 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:34:54 np0005551604 python3.9[145703]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtqemud.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  9 05:34:55 np0005551604 podman[145800]: 2025-12-09 10:34:55.135189939 +0000 UTC m=+0.079568334 container health_status e0a077177b2f078df1f170a6e5c0e8e08d4365b999ec0c487047ed6ab628f3d6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3)
Dec  9 05:34:55 np0005551604 python3.9[145842]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtqemud.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1765276494.1842458-554-74339909102518/.source.conf follow=False _original_basename=virtqemud.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:34:55 np0005551604 python3.9[146004]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/qemu.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  9 05:34:56 np0005551604 python3.9[146129]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/qemu.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1765276495.45127-554-72442478740385/.source.conf follow=False _original_basename=qemu.conf.j2 checksum=c44de21af13c90603565570f09ff60c6a41ed8df backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:34:57 np0005551604 python3.9[146281]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtsecretd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  9 05:34:57 np0005551604 python3.9[146406]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtsecretd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1765276496.7240787-554-190457117694357/.source.conf follow=False _original_basename=virtsecretd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:34:58 np0005551604 python3.9[146558]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/auth.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  9 05:34:59 np0005551604 python3.9[146683]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/auth.conf group=libvirt mode=0600 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1765276498.1114683-554-115404189868431/.source.conf follow=False _original_basename=auth.conf checksum=a94cd818c374cec2c8425b70d2e0e2f41b743ae4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:34:59 np0005551604 python3.9[146835]: ansible-ansible.legacy.stat Invoked with path=/etc/sasl2/libvirt.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  9 05:35:00 np0005551604 python3.9[146960]: ansible-ansible.legacy.copy Invoked with dest=/etc/sasl2/libvirt.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1765276499.3246698-554-170719578275275/.source.conf follow=False _original_basename=sasl_libvirt.conf checksum=652e4d404bf79253d06956b8e9847c9364979d4a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:35:01 np0005551604 python3.9[147112]: ansible-ansible.legacy.command Invoked with cmd=saslpasswd2 -f /etc/libvirt/passwd.db -p -a libvirt -u openstack migration stdin=12345678 _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None
Dec  9 05:35:01 np0005551604 python3.9[147265]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:35:02 np0005551604 python3.9[147417]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:35:03 np0005551604 python3.9[147569]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:35:04 np0005551604 python3.9[147721]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:35:04 np0005551604 python3.9[147873]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:35:05 np0005551604 python3.9[148025]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:35:06 np0005551604 python3.9[148177]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:35:06 np0005551604 python3.9[148329]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:35:07 np0005551604 python3.9[148481]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:35:08 np0005551604 python3.9[148633]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:35:08 np0005551604 python3.9[148785]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:35:09 np0005551604 python3.9[148937]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:35:10 np0005551604 python3.9[149089]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:35:10 np0005551604 python3.9[149241]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
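[Annotation] The file-module invocations above each create a root-owned, mode 0755 systemd drop-in directory for one libvirt socket unit. A minimal shell sketch of the same operation for one daemon's three sockets (paths taken from the log; the play itself iterates over all of the units shown):

    install -d -m 0755 -o root -g root /etc/systemd/system/virtqemud.socket.d
    install -d -m 0755 -o root -g root /etc/systemd/system/virtqemud-ro.socket.d
    install -d -m 0755 -o root -g root /etc/systemd/system/virtqemud-admin.socket.d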
Dec  9 05:35:11 np0005551604 python3.9[149393]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  9 05:35:12 np0005551604 python3.9[149516]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765276511.0403106-775-51596286416993/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:35:12 np0005551604 python3.9[149668]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  9 05:35:13 np0005551604 python3.9[149791]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765276512.3722663-775-166255459610076/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:35:14 np0005551604 python3.9[149943]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  9 05:35:14 np0005551604 python3.9[150066]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765276513.6233308-775-33387460043987/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:35:15 np0005551604 python3.9[150218]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  9 05:35:15 np0005551604 python3.9[150341]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765276514.9064238-775-206647001950809/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:35:16 np0005551604 python3.9[150493]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  9 05:35:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:35:16.961 106644 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  9 05:35:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:35:16.963 106644 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  9 05:35:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:35:16.963 106644 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  9 05:35:17 np0005551604 python3.9[150616]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765276516.1608899-775-36435593586137/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:35:18 np0005551604 python3.9[150768]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  9 05:35:18 np0005551604 python3.9[150891]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765276517.478315-775-78102357931910/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:35:19 np0005551604 python3.9[151043]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  9 05:35:19 np0005551604 podman[151138]: 2025-12-09 10:35:19.872059892 +0000 UTC m=+0.099713188 container health_status 8f562587c42532f877bd4ac5090cf2d81dd9415b6201e22f74972e6d6b9e9403 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3)
Dec  9 05:35:20 np0005551604 python3.9[151177]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765276518.7774246-775-59977128409655/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:35:20 np0005551604 python3.9[151338]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  9 05:35:21 np0005551604 python3.9[151461]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765276520.2118473-775-243014481057201/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:35:21 np0005551604 python3.9[151613]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  9 05:35:22 np0005551604 python3.9[151736]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765276521.4038959-775-194393450327516/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:35:23 np0005551604 python3.9[151888]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  9 05:35:23 np0005551604 python3.9[152011]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765276522.6019351-775-16254896172554/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:35:24 np0005551604 python3.9[152163]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  9 05:35:25 np0005551604 python3.9[152286]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765276523.9281914-775-234042580023202/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:35:25 np0005551604 podman[152410]: 2025-12-09 10:35:25.710699969 +0000 UTC m=+0.174730018 container health_status e0a077177b2f078df1f170a6e5c0e8e08d4365b999ec0c487047ed6ab628f3d6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0)
Dec  9 05:35:25 np0005551604 python3.9[152452]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  9 05:35:26 np0005551604 python3.9[152587]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765276525.2521908-775-57288425042167/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:35:27 np0005551604 python3.9[152739]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  9 05:35:27 np0005551604 python3.9[152862]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765276526.675867-775-95280258718068/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:35:28 np0005551604 python3.9[153016]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  9 05:35:29 np0005551604 python3.9[153139]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765276528.004999-775-274649665644882/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
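[Annotation] Each stat/copy pair above renders the same libvirt-socket.unit.j2 template into an override.conf drop-in; the identical checksum (0bad41f409b4ee7e780a2a59dc18f5c84ed99826) on every copy confirms all sockets receive the same content. A hypothetical spot-check that systemd picks the drop-ins up:

    systemctl daemon-reload
    systemctl cat virtqemud.socket                               # prints the unit followed by its override.conf
    sha1sum /etc/systemd/system/virt*.socket.d/override.conf     # all sums should match, per the log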
Dec  9 05:35:29 np0005551604 python3.9[153289]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; ls -lRZ /run/libvirt | grep -E ':container_\S+_t' _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  9 05:35:30 np0005551604 python3.9[153444]: ansible-ansible.posix.seboolean Invoked with name=os_enable_vtpm persistent=True state=True ignore_selinux_state=False
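[Annotation] The seboolean task above persistently enables the os_enable_vtpm SELinux boolean; the policy reload it triggers is visible as the avc: op=load_policy entry on the next line. CLI equivalent:

    setsebool -P os_enable_vtpm on     # -P writes the change into the policy store, surviving reboots
    getsebool os_enable_vtpm           # expect: os_enable_vtpm --> on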
Dec  9 05:35:32 np0005551604 dbus-broker-launch[772]: avc:  op=load_policy lsm=selinux seqno=15 res=1
Dec  9 05:35:32 np0005551604 python3.9[153600]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/servercert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:35:32 np0005551604 python3.9[153752]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/serverkey.pem group=root mode=0600 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:35:33 np0005551604 python3.9[153904]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/clientcert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:35:34 np0005551604 python3.9[154056]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/clientkey.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:35:34 np0005551604 python3.9[154208]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/CA/cacert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:35:35 np0005551604 python3.9[154360]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:35:36 np0005551604 python3.9[154512]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:35:36 np0005551604 python3.9[154664]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:35:37 np0005551604 python3.9[154816]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:35:38 np0005551604 python3.9[154968]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/ca-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
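[Annotation] The copy tasks above fan a single TLS key pair and CA certificate out to the locations libvirt and QEMU expect, with the QEMU copies group-readable by qemu (mode 0640). A sketch of a post-deployment sanity check (not part of the play itself):

    ls -l /etc/pki/libvirt/servercert.pem /etc/pki/libvirt/private/serverkey.pem \
          /etc/pki/CA/cacert.pem /etc/pki/qemu/
    openssl verify -CAfile /etc/pki/CA/cacert.pem /etc/pki/libvirt/servercert.pem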
Dec  9 05:35:39 np0005551604 python3.9[155120]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtlogd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  9 05:35:39 np0005551604 systemd[1]: Reloading.
Dec  9 05:35:39 np0005551604 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  9 05:35:39 np0005551604 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  9 05:35:39 np0005551604 systemd[1]: Starting libvirt logging daemon socket...
Dec  9 05:35:39 np0005551604 systemd[1]: Listening on libvirt logging daemon socket.
Dec  9 05:35:39 np0005551604 systemd[1]: Starting libvirt logging daemon admin socket...
Dec  9 05:35:39 np0005551604 systemd[1]: Listening on libvirt logging daemon admin socket.
Dec  9 05:35:39 np0005551604 systemd[1]: Starting libvirt logging daemon...
Dec  9 05:35:39 np0005551604 systemd[1]: Started libvirt logging daemon.
Dec  9 05:35:40 np0005551604 python3.9[155314]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtnodedevd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  9 05:35:40 np0005551604 systemd[1]: Reloading.
Dec  9 05:35:40 np0005551604 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  9 05:35:40 np0005551604 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  9 05:35:40 np0005551604 systemd[1]: Starting libvirt nodedev daemon socket...
Dec  9 05:35:40 np0005551604 systemd[1]: Listening on libvirt nodedev daemon socket.
Dec  9 05:35:40 np0005551604 systemd[1]: Starting libvirt nodedev daemon admin socket...
Dec  9 05:35:40 np0005551604 systemd[1]: Starting libvirt nodedev daemon read-only socket...
Dec  9 05:35:40 np0005551604 systemd[1]: Listening on libvirt nodedev daemon read-only socket.
Dec  9 05:35:40 np0005551604 systemd[1]: Listening on libvirt nodedev daemon admin socket.
Dec  9 05:35:40 np0005551604 systemd[1]: Starting libvirt nodedev daemon...
Dec  9 05:35:40 np0005551604 systemd[1]: Started libvirt nodedev daemon.
Dec  9 05:35:41 np0005551604 systemd[1]: Starting SETroubleshoot daemon for processing new SELinux denial logs...
Dec  9 05:35:41 np0005551604 systemd[1]: Started SETroubleshoot daemon for processing new SELinux denial logs.
Dec  9 05:35:41 np0005551604 python3.9[155530]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtproxyd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  9 05:35:41 np0005551604 systemd[1]: Reloading.
Dec  9 05:35:41 np0005551604 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  9 05:35:41 np0005551604 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  9 05:35:41 np0005551604 systemd[1]: Starting libvirt proxy daemon admin socket...
Dec  9 05:35:41 np0005551604 systemd[1]: Starting libvirt proxy daemon read-only socket...
Dec  9 05:35:41 np0005551604 systemd[1]: Listening on libvirt proxy daemon admin socket.
Dec  9 05:35:41 np0005551604 systemd[1]: Listening on libvirt proxy daemon read-only socket.
Dec  9 05:35:41 np0005551604 systemd[1]: Starting libvirt proxy daemon...
Dec  9 05:35:41 np0005551604 systemd[1]: Started libvirt proxy daemon.
Dec  9 05:35:41 np0005551604 systemd[1]: Created slice Slice /system/dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged.
Dec  9 05:35:41 np0005551604 systemd[1]: Started dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service.
Dec  9 05:35:42 np0005551604 python3.9[155750]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtqemud.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  9 05:35:42 np0005551604 systemd[1]: Reloading.
Dec  9 05:35:42 np0005551604 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  9 05:35:42 np0005551604 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  9 05:35:42 np0005551604 setroubleshoot[155501]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l 3f89051a-4810-4af9-9a87-e4ecee2c22f0
Dec  9 05:35:42 np0005551604 systemd[1]: Listening on libvirt locking daemon socket.
Dec  9 05:35:42 np0005551604 systemd[1]: Starting libvirt QEMU daemon socket...
Dec  9 05:35:42 np0005551604 systemd[1]: Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Dec  9 05:35:42 np0005551604 systemd[1]: Starting Virtual Machine and Container Registration Service...
Dec  9 05:35:42 np0005551604 systemd[1]: Listening on libvirt QEMU daemon socket.
Dec  9 05:35:42 np0005551604 systemd[1]: Starting libvirt QEMU daemon admin socket...
Dec  9 05:35:43 np0005551604 systemd[1]: Starting libvirt QEMU daemon read-only socket...
Dec  9 05:35:43 np0005551604 systemd[1]: Started Virtual Machine and Container Registration Service.
Dec  9 05:35:43 np0005551604 systemd[1]: Listening on libvirt QEMU daemon admin socket.
Dec  9 05:35:43 np0005551604 systemd[1]: Listening on libvirt QEMU daemon read-only socket.
Dec  9 05:35:43 np0005551604 setroubleshoot[155501]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.
    *****  Plugin dac_override (91.4 confidence) suggests   **********************
    If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system
    Then turn on full auditing to get path information about the offending file and generate the error again.
    Do
    Turn on full auditing
    # auditctl -w /etc/shadow -p w
    Try to recreate AVC. Then execute
    # ausearch -m avc -ts recent
    If you see PATH record check ownership/permissions on file, and fix it,
    otherwise report as a bugzilla.
    *****  Plugin catchall (9.59 confidence) suggests   **************************
    If you believe that virtlogd should have the dac_read_search capability by default.
    Then you should report this as a bug.
    You can generate a local policy module to allow this access.
    Do
    allow this access for now by executing:
    # ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd
    # semodule -X 300 -i my-virtlogd.pp
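[Annotation] setroubleshoot's suggestion above, condensed into one runnable sequence (run as root; my-virtlogd is the example module name from the log message, not something the play installs):

    auditctl -w /etc/shadow -p w              # turn on full auditing
    ausearch -m avc -ts recent                # reproduce the denial, then inspect the AVCs
    ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd
    semodule -X 300 -i my-virtlogd.pp         # install the generated local policy module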
Dec  9 05:35:43 np0005551604 systemd[1]: Starting libvirt QEMU daemon...
Dec  9 05:35:43 np0005551604 setroubleshoot[155501]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l 3f89051a-4810-4af9-9a87-e4ecee2c22f0
Dec  9 05:35:43 np0005551604 systemd[1]: Started libvirt QEMU daemon.
Dec  9 05:35:43 np0005551604 python3.9[155965]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtsecretd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  9 05:35:43 np0005551604 systemd[1]: Reloading.
Dec  9 05:35:44 np0005551604 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  9 05:35:44 np0005551604 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  9 05:35:44 np0005551604 systemd[1]: Starting libvirt secret daemon socket...
Dec  9 05:35:44 np0005551604 systemd[1]: Listening on libvirt secret daemon socket.
Dec  9 05:35:44 np0005551604 systemd[1]: Starting libvirt secret daemon admin socket...
Dec  9 05:35:44 np0005551604 systemd[1]: Starting libvirt secret daemon read-only socket...
Dec  9 05:35:44 np0005551604 systemd[1]: Listening on libvirt secret daemon admin socket.
Dec  9 05:35:44 np0005551604 systemd[1]: Listening on libvirt secret daemon read-only socket.
Dec  9 05:35:44 np0005551604 systemd[1]: Starting libvirt secret daemon...
Dec  9 05:35:44 np0005551604 systemd[1]: Started libvirt secret daemon.
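[Annotation] Each ansible.builtin.systemd task in this stretch performs a daemon-reload and restarts one modular libvirt daemon; systemd brings up the matching .socket units first, as the "Listening on ..." lines show. CLI equivalent for the last one:

    systemctl daemon-reload
    systemctl restart virtsecretd.service
    systemctl is-active virtsecretd.service virtsecretd.socket virtsecretd-admin.socket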
Dec  9 05:35:45 np0005551604 python3.9[156178]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:35:45 np0005551604 python3.9[156330]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.conf'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec  9 05:35:46 np0005551604 python3.9[156482]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/libvirt.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  9 05:35:47 np0005551604 python3.9[156605]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/libvirt.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1765276546.1248589-1120-128413579699312/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=5ca83b1310a74c5e48c4c3d4640e1cb8fdac1061 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:35:48 np0005551604 python3.9[156757]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:35:48 np0005551604 python3.9[156909]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  9 05:35:49 np0005551604 python3.9[156987]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:35:50 np0005551604 python3.9[157139]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  9 05:35:50 np0005551604 podman[157189]: 2025-12-09 10:35:50.386650963 +0000 UTC m=+0.070643352 container health_status 8f562587c42532f877bd4ac5090cf2d81dd9415b6201e22f74972e6d6b9e9403 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec  9 05:35:50 np0005551604 python3.9[157234]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.pv5bv7yw recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:35:51 np0005551604 python3.9[157387]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  9 05:35:51 np0005551604 python3.9[157465]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:35:52 np0005551604 python3.9[157617]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  9 05:35:53 np0005551604 systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service: Deactivated successfully.
Dec  9 05:35:53 np0005551604 systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service: Consumed 1.046s CPU time.
Dec  9 05:35:53 np0005551604 systemd[1]: setroubleshootd.service: Deactivated successfully.
Dec  9 05:35:53 np0005551604 python3[157770]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Dec  9 05:35:54 np0005551604 python3.9[157922]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  9 05:35:54 np0005551604 python3.9[158000]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:35:55 np0005551604 python3.9[158152]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  9 05:35:55 np0005551604 python3.9[158230]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:35:56 np0005551604 podman[158254]: 2025-12-09 10:35:56.026287095 +0000 UTC m=+0.178376607 container health_status e0a077177b2f078df1f170a6e5c0e8e08d4365b999ec0c487047ed6ab628f3d6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, container_name=ovn_controller, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Dec  9 05:35:56 np0005551604 python3.9[158408]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  9 05:35:57 np0005551604 python3.9[158486]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:35:57 np0005551604 python3.9[158640]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  9 05:35:58 np0005551604 python3.9[158718]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:35:59 np0005551604 python3.9[158870]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  9 05:35:59 np0005551604 python3.9[158995]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765276558.5821588-1245-55842328321180/.source.nft follow=False _original_basename=ruleset.j2 checksum=8a12d4eb5149b6e500230381c1359a710881e9b0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:36:00 np0005551604 python3.9[159147]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:36:01 np0005551604 python3.9[159299]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  9 05:36:02 np0005551604 python3.9[159454]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"; include "/etc/nftables/edpm-chains.nft"; include "/etc/nftables/edpm-rules.nft"; include "/etc/nftables/edpm-jumps.nft" path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:36:02 np0005551604 python3.9[159606]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  9 05:36:03 np0005551604 python3.9[159759]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  9 05:36:04 np0005551604 python3.9[159913]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  9 05:36:05 np0005551604 python3.9[160068]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
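[Annotation] The lines above show the edpm-nftables update pattern: render the rule fragments, touch a .changed marker, syntax-check the concatenated ruleset, make sure the chains exist, then flush and reload the rules and clear the marker. Condensed into the underlying commands (file names from the log):

    cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft \
        /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft \
        /etc/nftables/edpm-jumps.nft | nft -c -f -        # -c: dry-run syntax check only
    nft -f /etc/nftables/edpm-chains.nft                  # create/refresh the chains
    cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft \
        /etc/nftables/edpm-update-jumps.nft | nft -f -    # one transaction: flush, then reload rules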
Dec  9 05:36:05 np0005551604 python3.9[160220]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  9 05:36:06 np0005551604 python3.9[160343]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765276565.25584-1317-85491977375534/.source.target follow=False _original_basename=edpm_libvirt.target checksum=13035a1aa0f414c677b14be9a5a363b6623d393c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:36:07 np0005551604 python3.9[160495]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt_guests.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  9 05:36:07 np0005551604 python3.9[160618]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt_guests.service mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765276566.5768652-1332-171036422432515/.source.service follow=False _original_basename=edpm_libvirt_guests.service checksum=db83430a42fc2ccfd6ed8b56ebf04f3dff9cd0cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:36:08 np0005551604 python3.9[160770]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virt-guest-shutdown.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  9 05:36:08 np0005551604 python3.9[160893]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virt-guest-shutdown.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765276567.8723955-1347-77985509808563/.source.target follow=False _original_basename=virt-guest-shutdown.target checksum=49ca149619c596cbba877418629d2cf8f7b0f5cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:36:09 np0005551604 python3.9[161045]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt.target state=restarted daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  9 05:36:09 np0005551604 systemd[1]: Reloading.
Dec  9 05:36:09 np0005551604 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  9 05:36:09 np0005551604 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  9 05:36:09 np0005551604 systemd[1]: Reached target edpm_libvirt.target.
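[Annotation] The systemd task above reloads units, enables edpm_libvirt.target, and restarts it; "Reached target" confirms the restart took effect. CLI equivalent:

    systemctl daemon-reload
    systemctl enable edpm_libvirt.target
    systemctl restart edpm_libvirt.target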
Dec  9 05:36:10 np0005551604 python3.9[161236]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt_guests daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Dec  9 05:36:10 np0005551604 systemd[1]: Reloading.
Dec  9 05:36:10 np0005551604 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  9 05:36:10 np0005551604 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  9 05:36:11 np0005551604 systemd[1]: Reloading.
Dec  9 05:36:11 np0005551604 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  9 05:36:11 np0005551604 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  9 05:36:11 np0005551604 systemd[1]: session-23.scope: Deactivated successfully.
Dec  9 05:36:11 np0005551604 systemd[1]: session-23.scope: Consumed 3min 36.926s CPU time.
Dec  9 05:36:11 np0005551604 systemd-logind[806]: Session 23 logged out. Waiting for processes to exit.
Dec  9 05:36:11 np0005551604 systemd-logind[806]: Removed session 23.
Dec  9 05:36:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:36:16.963 106644 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  9 05:36:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:36:16.966 106644 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.004s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  9 05:36:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:36:16.967 106644 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  9 05:36:17 np0005551604 systemd-logind[806]: New session 24 of user zuul.
Dec  9 05:36:17 np0005551604 systemd[1]: Started Session 24 of User zuul.
Dec  9 05:36:18 np0005551604 python3.9[161487]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  9 05:36:19 np0005551604 python3.9[161641]: ansible-ansible.builtin.service_facts Invoked
Dec  9 05:36:19 np0005551604 network[161658]: You are using the 'network' service provided by 'network-scripts', which is now deprecated.
Dec  9 05:36:19 np0005551604 network[161659]: 'network-scripts' will be removed from the distribution in the near future.
Dec  9 05:36:19 np0005551604 network[161660]: It is advised to switch to 'NetworkManager' for network management instead.
Dec  9 05:36:20 np0005551604 podman[161667]: 2025-12-09 10:36:20.54953951 +0000 UTC m=+0.068854074 container health_status 8f562587c42532f877bd4ac5090cf2d81dd9415b6201e22f74972e6d6b9e9403 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202)
Dec  9 05:36:24 np0005551604 python3.9[161949]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  9 05:36:25 np0005551604 python3.9[162033]: ansible-ansible.legacy.dnf Invoked with name=['iscsi-initiator-utils'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
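The ansible.legacy.dnf task above boils down to a plain package install with default options; a minimal shell sketch, assuming sudo access on the node:

    # Install the iSCSI initiator utilities (same package the play installs)
    sudo dnf install -y iscsi-initiator-utils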
Dec  9 05:36:26 np0005551604 podman[162037]: 2025-12-09 10:36:26.953027537 +0000 UTC m=+0.106251651 container health_status e0a077177b2f078df1f170a6e5c0e8e08d4365b999ec0c487047ed6ab628f3d6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec  9 05:36:31 np0005551604 python3.9[162217]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated/iscsid/etc/iscsi follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  9 05:36:32 np0005551604 python3.9[162369]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -nvr /etc/iscsi /var/lib/iscsi _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  9 05:36:33 np0005551604 python3.9[162522]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.initiator_reset follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  9 05:36:34 np0005551604 python3.9[162674]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/iscsi-iname _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  9 05:36:34 np0005551604 python3.9[162827]: ansible-ansible.legacy.stat Invoked with path=/etc/iscsi/initiatorname.iscsi follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  9 05:36:35 np0005551604 python3.9[162950]: ansible-ansible.legacy.copy Invoked with dest=/etc/iscsi/initiatorname.iscsi mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765276594.3733435-95-232495538045493/.source.iscsi _original_basename=.b6im0d01 follow=False checksum=d96ef98e9faa79049d0821e0c39e20fc3cf21a0b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:36:36 np0005551604 python3.9[163102]: ansible-ansible.builtin.file Invoked with mode=0600 path=/etc/iscsi/.initiator_reset state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
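The iscsi-iname, copy, and touch tasks above amount to regenerating the host IQN by hand; a minimal sketch using the tool and file paths shown in the log:

    # Generate a fresh IQN and persist it in the standard InitiatorName= format
    IQN=$(/usr/sbin/iscsi-iname)
    echo "InitiatorName=${IQN}" | sudo tee /etc/iscsi/initiatorname.iscsi
    sudo chmod 0644 /etc/iscsi/initiatorname.iscsi
    # Marker file the role's stat task checks so the reset happens only once
    sudo touch /etc/iscsi/.initiator_reset && sudo chmod 0600 /etc/iscsi/.initiator_reset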
Dec  9 05:36:37 np0005551604 python3.9[163254]: ansible-ansible.builtin.lineinfile Invoked with insertafter=^#node.session.auth.chap.algs line=node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5 path=/etc/iscsi/iscsid.conf regexp=^node.session.auth.chap_algs state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
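The lineinfile task above pins the CHAP algorithm list in iscsid.conf; a rough shell equivalent (an idempotent append rather than an insert-after, so the line's position in the file differs slightly):

    # Ensure a single chap_algs directive in iscsid.conf
    grep -q '^node.session.auth.chap_algs' /etc/iscsi/iscsid.conf ||
      echo 'node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5' |
      sudo tee -a /etc/iscsi/iscsid.conf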
Dec  9 05:36:37 np0005551604 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec  9 05:36:38 np0005551604 python3.9[163407]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  9 05:36:38 np0005551604 systemd[1]: Listening on Open-iSCSI iscsid Socket.
Dec  9 05:36:39 np0005551604 python3.9[163563]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  9 05:36:39 np0005551604 systemd[1]: Reloading.
Dec  9 05:36:39 np0005551604 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  9 05:36:39 np0005551604 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update the package to include a native systemd unit file, in order to make it safer and more robust.
Dec  9 05:36:40 np0005551604 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Dec  9 05:36:40 np0005551604 systemd[1]: Starting Open-iSCSI...
Dec  9 05:36:40 np0005551604 kernel: Loading iSCSI transport class v2.0-870.
Dec  9 05:36:40 np0005551604 systemd[1]: Started Open-iSCSI.
Dec  9 05:36:40 np0005551604 systemd[1]: Starting Logout of all iSCSI sessions on shutdown...
Dec  9 05:36:40 np0005551604 systemd[1]: Finished Logout of all iSCSI sessions on shutdown.
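iscsid is socket-activated, which is why the socket unit is started before the daemon in the lines above; the two systemd_service tasks reduce to:

    # Enable and start the initiator socket and daemon (unit names from the log)
    sudo systemctl enable --now iscsid.socket
    sudo systemctl enable --now iscsid.service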
Dec  9 05:36:41 np0005551604 python3.9[163762]: ansible-ansible.builtin.service_facts Invoked
Dec  9 05:36:41 np0005551604 network[163779]: You are using the 'network' service provided by 'network-scripts', which is now deprecated.
Dec  9 05:36:41 np0005551604 network[163780]: 'network-scripts' will be removed from the distribution in the near future.
Dec  9 05:36:41 np0005551604 network[163781]: It is advised to switch to 'NetworkManager' for network management instead.
Dec  9 05:36:46 np0005551604 python3.9[164054]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Dec  9 05:36:47 np0005551604 python3.9[164206]: ansible-community.general.modprobe Invoked with name=dm-multipath state=present params= persistent=disabled
Dec  9 05:36:48 np0005551604 python3.9[164362]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/dm-multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  9 05:36:49 np0005551604 python3.9[164485]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/dm-multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765276608.3093913-172-161440376717828/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=065061c60917e4f67cecc70d12ce55e42f9d0b3f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:36:50 np0005551604 python3.9[164637]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=dm-multipath  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
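The modprobe and modules-load.d tasks above load dm-multipath immediately and persist it across reboots; a minimal sketch:

    # Load the multipath target now and on every boot
    sudo modprobe dm-multipath
    echo dm-multipath | sudo tee /etc/modules-load.d/dm-multipath.conf
    # Re-run the loader so the new drop-in is picked up (matches the restart below)
    sudo systemctl restart systemd-modules-load.service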
Dec  9 05:36:50 np0005551604 podman[164714]: 2025-12-09 10:36:50.98614361 +0000 UTC m=+0.112131840 container health_status 8f562587c42532f877bd4ac5090cf2d81dd9415b6201e22f74972e6d6b9e9403 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec  9 05:36:51 np0005551604 python3.9[164806]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  9 05:36:51 np0005551604 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec  9 05:36:51 np0005551604 systemd[1]: Stopped Load Kernel Modules.
Dec  9 05:36:51 np0005551604 systemd[1]: Stopping Load Kernel Modules...
Dec  9 05:36:51 np0005551604 systemd[1]: Starting Load Kernel Modules...
Dec  9 05:36:51 np0005551604 systemd[1]: Finished Load Kernel Modules.
Dec  9 05:36:52 np0005551604 python3.9[164962]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  9 05:36:53 np0005551604 python3.9[165114]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  9 05:36:54 np0005551604 python3.9[165266]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  9 05:36:54 np0005551604 python3.9[165418]: ansible-ansible.legacy.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  9 05:36:55 np0005551604 python3.9[165541]: ansible-ansible.legacy.copy Invoked with dest=/etc/multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765276614.2716906-230-241014184696775/.source.conf _original_basename=multipath.conf follow=False checksum=bf02ab264d3d648048a81f3bacec8bc58db93162 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:36:56 np0005551604 python3.9[165693]: ansible-ansible.legacy.command Invoked with _raw_params=grep -q '^blacklist\s*{' /etc/multipath.conf _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  9 05:36:56 np0005551604 python3.9[165846]: ansible-ansible.builtin.lineinfile Invoked with line=blacklist { path=/etc/multipath.conf state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:36:57 np0005551604 podman[165972]: 2025-12-09 10:36:57.833246875 +0000 UTC m=+0.123684844 container health_status e0a077177b2f078df1f170a6e5c0e8e08d4365b999ec0c487047ed6ab628f3d6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, org.label-schema.build-date=20251202, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
Dec  9 05:36:57 np0005551604 python3.9[166021]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^(blacklist {) replace=\1\n} backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:36:58 np0005551604 python3.9[166178]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^blacklist\s*{\n[\s]+devnode \"\.\*\" replace=blacklist { backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:36:59 np0005551604 python3.9[166330]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        find_multipaths yes path=/etc/multipath.conf regexp=^\s+find_multipaths state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:37:00 np0005551604 python3.9[166482]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        recheck_wwid yes path=/etc/multipath.conf regexp=^\s+recheck_wwid state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:37:01 np0005551604 python3.9[166634]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        skip_kpartx yes path=/etc/multipath.conf regexp=^\s+skip_kpartx state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:37:01 np0005551604 python3.9[166786]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        user_friendly_names no path=/etc/multipath.conf regexp=^\s+user_friendly_names state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:37:02 np0005551604 python3.9[166938]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  9 05:37:03 np0005551604 python3.9[167092]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/multipath/.multipath_restart_required state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:37:04 np0005551604 python3.9[167244]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  9 05:37:05 np0005551604 python3.9[167396]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  9 05:37:06 np0005551604 python3.9[167474]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  9 05:37:06 np0005551604 python3.9[167626]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  9 05:37:07 np0005551604 python3.9[167704]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  9 05:37:08 np0005551604 python3.9[167856]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:37:09 np0005551604 python3.9[168008]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  9 05:37:09 np0005551604 python3.9[168086]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:37:10 np0005551604 python3.9[168238]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  9 05:37:11 np0005551604 python3.9[168316]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:37:12 np0005551604 python3.9[168468]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  9 05:37:12 np0005551604 systemd[1]: Reloading.
Dec  9 05:37:12 np0005551604 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  9 05:37:12 np0005551604 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update the package to include a native systemd unit file, in order to make it safer and more robust.
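After the unit file, preset, and daemon-reload above, the shutdown helper can be verified directly (unit and preset names as they appear in the log):

    # Confirm the container-shutdown helper is enabled and running
    systemctl is-enabled edpm-container-shutdown.service
    systemctl is-active edpm-container-shutdown.service
    cat /etc/systemd/system-preset/91-edpm-container-shutdown.preset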
Dec  9 05:37:13 np0005551604 python3.9[168657]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  9 05:37:14 np0005551604 python3.9[168735]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:37:15 np0005551604 python3.9[168887]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  9 05:37:15 np0005551604 python3.9[168965]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:37:16 np0005551604 python3.9[169117]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  9 05:37:16 np0005551604 systemd[1]: Reloading.
Dec  9 05:37:16 np0005551604 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  9 05:37:16 np0005551604 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update the package to include a native systemd unit file, in order to make it safer and more robust.
Dec  9 05:37:16 np0005551604 systemd[1]: Starting Create netns directory...
Dec  9 05:37:16 np0005551604 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Dec  9 05:37:16 np0005551604 systemd[1]: netns-placeholder.service: Deactivated successfully.
Dec  9 05:37:16 np0005551604 systemd[1]: Finished Create netns directory.
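The one-shot netns-placeholder unit exists so that /run/netns is present and mounted shared before containers bind-mount it (the ovn_metadata_agent volume list above includes '/run/netns:/run/netns:shared'); a quick check, assuming findmnt is available on the host:

    # /run/netns should be a mount point with shared propagation
    findmnt -o TARGET,PROPAGATION /run/netns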
Dec  9 05:37:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:37:16.963 106644 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  9 05:37:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:37:16.965 106644 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  9 05:37:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:37:16.965 106644 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  9 05:37:17 np0005551604 python3.9[169310]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  9 05:37:18 np0005551604 python3.9[169462]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/multipathd/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  9 05:37:19 np0005551604 python3.9[169585]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/multipathd/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765276637.9584718-437-18884629550102/.source _original_basename=healthcheck follow=False checksum=af9d0c1c8f3cb0e30ce9609be9d5b01924d0d23f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec  9 05:37:20 np0005551604 python3.9[169737]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  9 05:37:20 np0005551604 python3.9[169889]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/multipathd.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  9 05:37:21 np0005551604 podman[169984]: 2025-12-09 10:37:21.217603569 +0000 UTC m=+0.066332016 container health_status 8f562587c42532f877bd4ac5090cf2d81dd9415b6201e22f74972e6d6b9e9403 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, managed_by=edpm_ansible, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Dec  9 05:37:21 np0005551604 python3.9[170030]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/multipathd.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1765276640.2955046-462-6804839964574/.source.json _original_basename=.k5wzbzvu follow=False checksum=3f7959ee8ac9757398adcc451c3b416c957d7c14 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:37:22 np0005551604 python3.9[170183]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/multipathd state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:37:24 np0005551604 python3.9[170610]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/multipathd config_pattern=*.json debug=False
Dec  9 05:37:25 np0005551604 python3.9[170762]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec  9 05:37:26 np0005551604 python3.9[170914]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Dec  9 05:37:28 np0005551604 podman[171065]: 2025-12-09 10:37:28.151596116 +0000 UTC m=+0.114534001 container health_status e0a077177b2f078df1f170a6e5c0e8e08d4365b999ec0c487047ed6ab628f3d6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Dec  9 05:37:28 np0005551604 python3[171112]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/multipathd config_id=multipathd config_overrides={} config_patterns=*.json log_base_path=/var/log/containers/stdouts debug=False
Dec  9 05:37:28 np0005551604 podman[171154]: 2025-12-09 10:37:28.660809144 +0000 UTC m=+0.073097276 container create 0391d8911d61abd7376f1f93f329cadfe8d3add845c9e6f46fc2c3dfbcc4f02a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=multipathd, io.buildah.version=1.41.3, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  9 05:37:28 np0005551604 podman[171154]: 2025-12-09 10:37:28.625868091 +0000 UTC m=+0.038156313 image pull bcd3898ac099c7fff3d2ff3fc32de931119ed36068f8a2617bd8fa95e51d1b81 quay.io/podified-antelope-centos9/openstack-multipathd:current-podified
Dec  9 05:37:28 np0005551604 python3[171112]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name multipathd --conmon-pidfile /run/multipathd.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --healthcheck-command /openstack/healthcheck --label config_id=multipathd --label container_name=multipathd --label managed_by=edpm_ansible --label config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --volume /etc/hosts:/etc/hosts:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro --volume /dev/log:/dev/log --volume /var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro --volume /dev:/dev --volume /run/udev:/run/udev --volume /sys:/sys --volume /lib/modules:/lib/modules:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /var/lib/iscsi:/var/lib/iscsi --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /var/lib/openstack/healthchecks/multipathd:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-multipathd:current-podified
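Once systemd starts the generated unit, the container can be exercised by hand; 'podman healthcheck run' is the same command the transient healthcheck unit wraps later in the log:

    # Check the container and run its healthcheck once (exit 0 == healthy)
    podman ps --filter name=multipathd
    podman healthcheck run multipathd && echo healthy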
Dec  9 05:37:29 np0005551604 python3.9[171346]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  9 05:37:30 np0005551604 python3.9[171500]: ansible-file Invoked with path=/etc/systemd/system/edpm_multipathd.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:37:31 np0005551604 python3.9[171576]: ansible-stat Invoked with path=/etc/systemd/system/edpm_multipathd_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  9 05:37:31 np0005551604 python3.9[171727]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765276651.1743646-550-279337523479128/source dest=/etc/systemd/system/edpm_multipathd.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:37:32 np0005551604 python3.9[171803]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec  9 05:37:32 np0005551604 systemd[1]: Reloading.
Dec  9 05:37:32 np0005551604 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  9 05:37:32 np0005551604 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update the package to include a native systemd unit file, in order to make it safer and more robust.
Dec  9 05:37:33 np0005551604 python3.9[171914]: ansible-systemd Invoked with state=restarted name=edpm_multipathd.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  9 05:37:33 np0005551604 systemd[1]: Reloading.
Dec  9 05:37:33 np0005551604 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  9 05:37:33 np0005551604 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update the package to include a native systemd unit file, in order to make it safer and more robust.
Dec  9 05:37:33 np0005551604 systemd[1]: Starting multipathd container...
Dec  9 05:37:33 np0005551604 systemd[1]: Started libcrun container.
Dec  9 05:37:33 np0005551604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c8791dbf415cc62bb7dfa0e9aec615bfb573f9c47e4809e4d87a7aae741087c/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Dec  9 05:37:33 np0005551604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c8791dbf415cc62bb7dfa0e9aec615bfb573f9c47e4809e4d87a7aae741087c/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Dec  9 05:37:33 np0005551604 systemd[1]: Started /usr/bin/podman healthcheck run 0391d8911d61abd7376f1f93f329cadfe8d3add845c9e6f46fc2c3dfbcc4f02a.
Dec  9 05:37:33 np0005551604 podman[171954]: 2025-12-09 10:37:33.85101657 +0000 UTC m=+0.143172236 container init 0391d8911d61abd7376f1f93f329cadfe8d3add845c9e6f46fc2c3dfbcc4f02a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Dec  9 05:37:33 np0005551604 multipathd[171970]: + sudo -E kolla_set_configs
Dec  9 05:37:33 np0005551604 podman[171954]: 2025-12-09 10:37:33.888867545 +0000 UTC m=+0.181023151 container start 0391d8911d61abd7376f1f93f329cadfe8d3add845c9e6f46fc2c3dfbcc4f02a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251202)
Dec  9 05:37:33 np0005551604 podman[171954]: multipathd
Dec  9 05:37:33 np0005551604 systemd[1]: Started multipathd container.
Dec  9 05:37:33 np0005551604 multipathd[171970]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec  9 05:37:33 np0005551604 multipathd[171970]: INFO:__main__:Validating config file
Dec  9 05:37:33 np0005551604 multipathd[171970]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec  9 05:37:33 np0005551604 multipathd[171970]: INFO:__main__:Writing out command to execute
Dec  9 05:37:33 np0005551604 multipathd[171970]: ++ cat /run_command
Dec  9 05:37:33 np0005551604 multipathd[171970]: + CMD='/usr/sbin/multipathd -d'
Dec  9 05:37:33 np0005551604 multipathd[171970]: + ARGS=
Dec  9 05:37:33 np0005551604 multipathd[171970]: + sudo kolla_copy_cacerts
Dec  9 05:37:33 np0005551604 multipathd[171970]: + [[ ! -n '' ]]
Dec  9 05:37:33 np0005551604 multipathd[171970]: + . kolla_extend_start
Dec  9 05:37:33 np0005551604 multipathd[171970]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\'''
Dec  9 05:37:33 np0005551604 multipathd[171970]: Running command: '/usr/sbin/multipathd -d'
Dec  9 05:37:33 np0005551604 multipathd[171970]: + umask 0022
Dec  9 05:37:33 np0005551604 multipathd[171970]: + exec /usr/sbin/multipathd -d
Dec  9 05:37:33 np0005551604 podman[171976]: 2025-12-09 10:37:33.990480112 +0000 UTC m=+0.077690436 container health_status 0391d8911d61abd7376f1f93f329cadfe8d3add845c9e6f46fc2c3dfbcc4f02a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=starting, health_failing_streak=1, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec  9 05:37:33 np0005551604 systemd[1]: 0391d8911d61abd7376f1f93f329cadfe8d3add845c9e6f46fc2c3dfbcc4f02a-5269e27426dc50fc.service: Main process exited, code=exited, status=1/FAILURE
Dec  9 05:37:33 np0005551604 systemd[1]: 0391d8911d61abd7376f1f93f329cadfe8d3add845c9e6f46fc2c3dfbcc4f02a-5269e27426dc50fc.service: Failed with result 'exit-code'.
Dec  9 05:37:34 np0005551604 multipathd[171970]: 3278.862522 | --------start up--------
Dec  9 05:37:34 np0005551604 multipathd[171970]: 3278.862552 | read /etc/multipath.conf
Dec  9 05:37:34 np0005551604 multipathd[171970]: 3278.871968 | path checkers start up
Dec  9 05:37:34 np0005551604 python3.9[172160]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath/.multipath_restart_required follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  9 05:37:35 np0005551604 python3.9[172314]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps --filter volume=/etc/multipath.conf --format {{.Names}} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
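When run from an interactive shell, the Go template in the lookup above needs quoting:

    # List containers that mount /etc/multipath.conf
    podman ps --filter volume=/etc/multipath.conf --format '{{.Names}}'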
Dec  9 05:37:36 np0005551604 python3.9[172478]: ansible-ansible.builtin.systemd Invoked with name=edpm_multipathd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  9 05:37:36 np0005551604 systemd[1]: Stopping multipathd container...
Dec  9 05:37:36 np0005551604 multipathd[171970]: 3281.582186 | exit (signal)
Dec  9 05:37:36 np0005551604 multipathd[171970]: 3281.582522 | --------shut down-------
Dec  9 05:37:36 np0005551604 systemd[1]: libpod-0391d8911d61abd7376f1f93f329cadfe8d3add845c9e6f46fc2c3dfbcc4f02a.scope: Deactivated successfully.
Dec  9 05:37:36 np0005551604 podman[172482]: 2025-12-09 10:37:36.780016097 +0000 UTC m=+0.102748700 container died 0391d8911d61abd7376f1f93f329cadfe8d3add845c9e6f46fc2c3dfbcc4f02a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec  9 05:37:36 np0005551604 systemd[1]: 0391d8911d61abd7376f1f93f329cadfe8d3add845c9e6f46fc2c3dfbcc4f02a-5269e27426dc50fc.timer: Deactivated successfully.
Dec  9 05:37:36 np0005551604 systemd[1]: Stopped /usr/bin/podman healthcheck run 0391d8911d61abd7376f1f93f329cadfe8d3add845c9e6f46fc2c3dfbcc4f02a.
Dec  9 05:37:36 np0005551604 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-0391d8911d61abd7376f1f93f329cadfe8d3add845c9e6f46fc2c3dfbcc4f02a-userdata-shm.mount: Deactivated successfully.
Dec  9 05:37:36 np0005551604 systemd[1]: var-lib-containers-storage-overlay-0c8791dbf415cc62bb7dfa0e9aec615bfb573f9c47e4809e4d87a7aae741087c-merged.mount: Deactivated successfully.
Dec  9 05:37:36 np0005551604 podman[172482]: 2025-12-09 10:37:36.839693015 +0000 UTC m=+0.162425598 container cleanup 0391d8911d61abd7376f1f93f329cadfe8d3add845c9e6f46fc2c3dfbcc4f02a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team)
Dec  9 05:37:36 np0005551604 podman[172482]: multipathd
Dec  9 05:37:36 np0005551604 podman[172512]: multipathd
Dec  9 05:37:36 np0005551604 systemd[1]: edpm_multipathd.service: Deactivated successfully.
Dec  9 05:37:36 np0005551604 systemd[1]: Stopped multipathd container.
Dec  9 05:37:36 np0005551604 systemd[1]: Starting multipathd container...
Dec  9 05:37:37 np0005551604 systemd[1]: Started libcrun container.
Dec  9 05:37:37 np0005551604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c8791dbf415cc62bb7dfa0e9aec615bfb573f9c47e4809e4d87a7aae741087c/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Dec  9 05:37:37 np0005551604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/0c8791dbf415cc62bb7dfa0e9aec615bfb573f9c47e4809e4d87a7aae741087c/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Dec  9 05:37:37 np0005551604 systemd[1]: Started /usr/bin/podman healthcheck run 0391d8911d61abd7376f1f93f329cadfe8d3add845c9e6f46fc2c3dfbcc4f02a.
Dec  9 05:37:37 np0005551604 podman[172525]: 2025-12-09 10:37:37.077398669 +0000 UTC m=+0.142121147 container init 0391d8911d61abd7376f1f93f329cadfe8d3add845c9e6f46fc2c3dfbcc4f02a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=multipathd)
Dec  9 05:37:37 np0005551604 multipathd[172540]: + sudo -E kolla_set_configs
Dec  9 05:37:37 np0005551604 podman[172525]: 2025-12-09 10:37:37.106434915 +0000 UTC m=+0.171157403 container start 0391d8911d61abd7376f1f93f329cadfe8d3add845c9e6f46fc2c3dfbcc4f02a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.schema-version=1.0, config_id=multipathd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Dec  9 05:37:37 np0005551604 podman[172525]: multipathd
Dec  9 05:37:37 np0005551604 systemd[1]: Started multipathd container.
Dec  9 05:37:37 np0005551604 multipathd[172540]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec  9 05:37:37 np0005551604 multipathd[172540]: INFO:__main__:Validating config file
Dec  9 05:37:37 np0005551604 multipathd[172540]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec  9 05:37:37 np0005551604 multipathd[172540]: INFO:__main__:Writing out command to execute
Dec  9 05:37:37 np0005551604 multipathd[172540]: ++ cat /run_command
Dec  9 05:37:37 np0005551604 multipathd[172540]: + CMD='/usr/sbin/multipathd -d'
Dec  9 05:37:37 np0005551604 multipathd[172540]: + ARGS=
Dec  9 05:37:37 np0005551604 multipathd[172540]: + sudo kolla_copy_cacerts
Dec  9 05:37:37 np0005551604 multipathd[172540]: + [[ ! -n '' ]]
Dec  9 05:37:37 np0005551604 multipathd[172540]: + . kolla_extend_start
Dec  9 05:37:37 np0005551604 multipathd[172540]: Running command: '/usr/sbin/multipathd -d'
Dec  9 05:37:37 np0005551604 multipathd[172540]: + echo 'Running command: '\''/usr/sbin/multipathd -d'\'''
Dec  9 05:37:37 np0005551604 multipathd[172540]: + umask 0022
Dec  9 05:37:37 np0005551604 multipathd[172540]: + exec /usr/sbin/multipathd -d
Dec  9 05:37:37 np0005551604 multipathd[172540]: 3282.032835 | --------start up--------
Dec  9 05:37:37 np0005551604 multipathd[172540]: 3282.032851 | read /etc/multipath.conf
Dec  9 05:37:37 np0005551604 multipathd[172540]: 3282.038844 | path checkers start up
Dec  9 05:37:37 np0005551604 podman[172547]: 2025-12-09 10:37:37.204505503 +0000 UTC m=+0.086822763 container health_status 0391d8911d61abd7376f1f93f329cadfe8d3add845c9e6f46fc2c3dfbcc4f02a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
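[Annotation] The '+'-prefixed multipathd[172540] lines above are the xtrace output of the Kolla entrypoint inside the freshly started container. A minimal sketch of the sequence it records, assuming the standard kolla_start flow (the helper names appear verbatim in the trace; the consolidation into one script is mine):

    sudo -E kolla_set_configs          # apply /var/lib/kolla/config_files/config.json (strategy COPY_ALWAYS)
    CMD="$(cat /run_command)"          # here: /usr/sbin/multipathd -d
    ARGS=
    sudo kolla_copy_cacerts            # refresh the container's CA trust store
    . kolla_extend_start               # image-specific startup hook (no-op here)
    echo "Running command: '${CMD}'"
    umask 0022
    exec ${CMD} ${ARGS}                # replace the shell; multipathd becomes the container's main process

The daemon's own banner (start up / read /etc/multipath.conf / path checkers start up) and the healthy health_status event above confirm the restart completed.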
Dec  9 05:37:37 np0005551604 python3.9[172730]: ansible-ansible.builtin.file Invoked with path=/etc/multipath/.multipath_restart_required state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
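[Annotation] Together with the earlier stat of /etc/multipath/.multipath_restart_required, the "podman ps --filter volume=/etc/multipath.conf" lookup, and the systemd restart of edpm_multipathd, the file task above completes a flag-file pattern: multipathd is restarted only when a prior task marked its configuration as changed. A hedged shell equivalent (paths and the unit name are from the log; the control flow is my reconstruction):

    if [ -e /etc/multipath/.multipath_restart_required ]; then
        # which container has /etc/multipath.conf mounted?
        podman ps --filter volume=/etc/multipath.conf --format '{{.Names}}'
        systemctl restart edpm_multipathd.service
        rm -f /etc/multipath/.multipath_restart_required    # clear the flag after a successful restart
    fi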
Dec  9 05:37:39 np0005551604 python3.9[172882]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Dec  9 05:37:40 np0005551604 python3.9[173034]: ansible-community.general.modprobe Invoked with name=nvme-fabrics state=present params= persistent=disabled
Dec  9 05:37:40 np0005551604 kernel: Key type psk registered
Dec  9 05:37:40 np0005551604 systemd[1]: virtnodedevd.service: Deactivated successfully.
Dec  9 05:37:41 np0005551604 python3.9[173198]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/nvme-fabrics.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  9 05:37:41 np0005551604 python3.9[173321]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/nvme-fabrics.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1765276660.688303-630-50489531492381/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=783c778f0c68cc414f35486f234cbb1cf3f9bbff backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:37:41 np0005551604 systemd[1]: virtproxyd.service: Deactivated successfully.
Dec  9 05:37:42 np0005551604 python3.9[173474]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=nvme-fabrics  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:37:43 np0005551604 systemd[1]: virtqemud.service: Deactivated successfully.
Dec  9 05:37:44 np0005551604 python3.9[173627]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  9 05:37:44 np0005551604 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec  9 05:37:44 np0005551604 systemd[1]: Stopped Load Kernel Modules.
Dec  9 05:37:44 np0005551604 systemd[1]: Stopping Load Kernel Modules...
Dec  9 05:37:44 np0005551604 systemd[1]: Starting Load Kernel Modules...
Dec  9 05:37:44 np0005551604 systemd[1]: Finished Load Kernel Modules.
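[Annotation] The modprobe, copy, lineinfile, and systemd tasks between 05:37:39 and 05:37:44 are the standard three-part recipe for loading a kernel module now and on every boot. A shell sketch of the same steps (the file contents are inferred from the task parameters, since the copied snippet is logged only as a checksum):

    modprobe nvme-fabrics                                                       # load immediately
    printf 'nvme-fabrics\n' > /etc/modules-load.d/nvme-fabrics.conf             # systemd path, mode 0644
    grep -qxF nvme-fabrics /etc/modules || echo nvme-fabrics >> /etc/modules    # legacy path
    systemctl restart systemd-modules-load.service                              # re-run the loader, verified above

The kernel's "Key type psk registered" message at 05:37:40 appears to be a side effect of the nvme-fabrics module load via its dependencies.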
Dec  9 05:37:44 np0005551604 systemd[1]: virtsecretd.service: Deactivated successfully.
Dec  9 05:37:45 np0005551604 python3.9[173784]: ansible-ansible.legacy.dnf Invoked with name=['nvme-cli'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  9 05:37:50 np0005551604 systemd[1]: Reloading.
Dec  9 05:37:50 np0005551604 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  9 05:37:50 np0005551604 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  9 05:37:50 np0005551604 systemd[1]: Reloading.
Dec  9 05:37:50 np0005551604 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  9 05:37:50 np0005551604 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  9 05:37:51 np0005551604 systemd-logind[806]: Watching system buttons on /dev/input/event0 (Power Button)
Dec  9 05:37:51 np0005551604 systemd-logind[806]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Dec  9 05:37:51 np0005551604 podman[173893]: 2025-12-09 10:37:51.427798679 +0000 UTC m=+0.063607080 container health_status 8f562587c42532f877bd4ac5090cf2d81dd9415b6201e22f74972e6d6b9e9403 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  9 05:37:51 np0005551604 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Dec  9 05:37:51 np0005551604 systemd[1]: Starting man-db-cache-update.service...
Dec  9 05:37:51 np0005551604 systemd[1]: Reloading.
Dec  9 05:37:51 np0005551604 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  9 05:37:51 np0005551604 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  9 05:37:51 np0005551604 systemd[1]: Queuing reload/restart jobs for marked units…
Dec  9 05:37:52 np0005551604 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Dec  9 05:37:52 np0005551604 systemd[1]: Finished man-db-cache-update.service.
Dec  9 05:37:52 np0005551604 systemd[1]: man-db-cache-update.service: Consumed 1.550s CPU time.
Dec  9 05:37:52 np0005551604 systemd[1]: run-r22cf26c6cdbe4eeab63dde32618f0386.service: Deactivated successfully.
Dec  9 05:37:53 np0005551604 python3.9[175268]: ansible-ansible.builtin.systemd_service Invoked with name=iscsid state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  9 05:37:53 np0005551604 systemd[1]: Stopping Open-iSCSI...
Dec  9 05:37:53 np0005551604 iscsid[163602]: iscsid shutting down.
Dec  9 05:37:53 np0005551604 systemd[1]: iscsid.service: Deactivated successfully.
Dec  9 05:37:53 np0005551604 systemd[1]: Stopped Open-iSCSI.
Dec  9 05:37:53 np0005551604 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Dec  9 05:37:53 np0005551604 systemd[1]: Starting Open-iSCSI...
Dec  9 05:37:53 np0005551604 systemd[1]: Started Open-iSCSI.
Dec  9 05:37:53 np0005551604 python3.9[175423]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  9 05:37:54 np0005551604 python3.9[175579]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/ssh/ssh_known_hosts state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:37:55 np0005551604 python3.9[175731]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec  9 05:37:55 np0005551604 systemd[1]: Reloading.
Dec  9 05:37:55 np0005551604 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  9 05:37:55 np0005551604 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  9 05:37:56 np0005551604 python3.9[175916]: ansible-ansible.builtin.service_facts Invoked
Dec  9 05:37:56 np0005551604 network[175933]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec  9 05:37:56 np0005551604 network[175934]: 'network-scripts' will be removed from distribution in near future.
Dec  9 05:37:56 np0005551604 network[175935]: It is advised to switch to 'NetworkManager' instead for network management.
Dec  9 05:37:58 np0005551604 podman[175958]: 2025-12-09 10:37:58.936252048 +0000 UTC m=+0.096418322 container health_status e0a077177b2f078df1f170a6e5c0e8e08d4365b999ec0c487047ed6ab628f3d6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS)
Dec  9 05:38:01 np0005551604 python3.9[176235]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  9 05:38:02 np0005551604 python3.9[176390]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_migration_target.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  9 05:38:03 np0005551604 python3.9[176543]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api_cron.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  9 05:38:03 np0005551604 python3.9[176696]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  9 05:38:04 np0005551604 python3.9[176849]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_conductor.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  9 05:38:05 np0005551604 python3.9[177002]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_metadata.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  9 05:38:06 np0005551604 python3.9[177155]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_scheduler.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  9 05:38:07 np0005551604 python3.9[177308]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_vnc_proxy.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
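[Annotation] The eight systemd_service invocations above stop and disable the legacy TripleO nova units as part of the TripleO-to-EDPM cleanup; a compact equivalent of what the play iterated over (service list taken verbatim from the log):

    for svc in compute migration_target api_cron api conductor metadata scheduler vnc_proxy; do
        systemctl disable --now "tripleo_nova_${svc}.service"
    done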
Dec  9 05:38:07 np0005551604 podman[177433]: 2025-12-09 10:38:07.813091384 +0000 UTC m=+0.061094929 container health_status 0391d8911d61abd7376f1f93f329cadfe8d3add845c9e6f46fc2c3dfbcc4f02a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=multipathd, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec  9 05:38:07 np0005551604 python3.9[177481]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:38:08 np0005551604 python3.9[177633]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:38:09 np0005551604 python3.9[177785]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:38:10 np0005551604 python3.9[177937]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:38:10 np0005551604 python3.9[178089]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:38:11 np0005551604 python3.9[178241]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:38:11 np0005551604 python3.9[178393]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:38:12 np0005551604 python3.9[178545]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:38:13 np0005551604 python3.9[178697]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:38:13 np0005551604 python3.9[178849]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:38:14 np0005551604 python3.9[179001]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:38:15 np0005551604 python3.9[179153]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:38:15 np0005551604 python3.9[179305]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:38:16 np0005551604 python3.9[179457]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:38:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:38:16.965 106644 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  9 05:38:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:38:16.966 106644 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  9 05:38:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:38:16.967 106644 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423

Dec  9 05:38:17 np0005551604 python3.9[179609]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:38:17 np0005551604 python3.9[179761]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
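[Annotation] The file tasks from 05:38:07 onward then remove the unit files themselves, from both the vendor and the admin unit directories; roughly:

    for dir in /usr/lib/systemd/system /etc/systemd/system; do
        for svc in compute migration_target api_cron api conductor metadata scheduler vnc_proxy; do
            rm -f "${dir}/tripleo_nova_${svc}.service"
        done
    done
    systemctl daemon-reload    # issued by the play at 05:38:20 below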
Dec  9 05:38:18 np0005551604 python3.9[179913]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012  systemctl disable --now certmonger.service#012  test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
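[Annotation] journald escapes embedded newlines as #012, which makes the certmonger guard above hard to read. Decoded, the _raw_params script is:

    if systemctl is-active certmonger.service; then
        systemctl disable --now certmonger.service
        test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
    fi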
Dec  9 05:38:19 np0005551604 python3.9[180065]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec  9 05:38:20 np0005551604 python3.9[180217]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec  9 05:38:20 np0005551604 systemd[1]: Reloading.
Dec  9 05:38:20 np0005551604 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  9 05:38:20 np0005551604 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  9 05:38:21 np0005551604 podman[180301]: 2025-12-09 10:38:21.896569268 +0000 UTC m=+0.057341843 container health_status 8f562587c42532f877bd4ac5090cf2d81dd9415b6201e22f74972e6d6b9e9403 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  9 05:38:22 np0005551604 python3.9[180425]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  9 05:38:22 np0005551604 python3.9[180578]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_migration_target.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  9 05:38:23 np0005551604 python3.9[180731]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api_cron.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  9 05:38:24 np0005551604 python3.9[180884]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  9 05:38:24 np0005551604 python3.9[181037]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_conductor.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  9 05:38:25 np0005551604 python3.9[181190]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_metadata.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  9 05:38:26 np0005551604 python3.9[181343]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_scheduler.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  9 05:38:27 np0005551604 python3.9[181496]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_vnc_proxy.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
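[Annotation] With the units stopped and their files deleted, the play clears systemd's failed-state bookkeeping so the vanished units no longer linger in "systemctl --failed"; the eight commands above amount to:

    for svc in compute migration_target api_cron api conductor metadata scheduler vnc_proxy; do
        /usr/bin/systemctl reset-failed "tripleo_nova_${svc}.service"
    done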
Dec  9 05:38:28 np0005551604 python3.9[181649]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  9 05:38:29 np0005551604 podman[181801]: 2025-12-09 10:38:29.138361934 +0000 UTC m=+0.136258012 container health_status e0a077177b2f078df1f170a6e5c0e8e08d4365b999ec0c487047ed6ab628f3d6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec  9 05:38:29 np0005551604 python3.9[181802]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/containers setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  9 05:38:29 np0005551604 python3.9[181980]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova_nvme_cleaner setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  9 05:38:30 np0005551604 python3.9[182132]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  9 05:38:31 np0005551604 python3.9[182284]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/_nova_secontext setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  9 05:38:31 np0005551604 python3.9[182436]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova/instances setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  9 05:38:32 np0005551604 python3.9[182588]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/etc/ceph setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  9 05:38:33 np0005551604 python3.9[182740]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec  9 05:38:33 np0005551604 python3.9[182892]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/nvme setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec  9 05:38:34 np0005551604 python3.9[183046]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/run/openvswitch setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Dec  9 05:38:38 np0005551604 podman[183071]: 2025-12-09 10:38:38.919474272 +0000 UTC m=+0.078320420 container health_status 0391d8911d61abd7376f1f93f329cadfe8d3add845c9e6f46fc2c3dfbcc4f02a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, container_name=multipathd, managed_by=edpm_ansible, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  9 05:38:40 np0005551604 python3.9[183218]: ansible-ansible.builtin.getent Invoked with database=passwd key=nova fail_key=True service=None split=None
Dec  9 05:38:41 np0005551604 python3.9[183371]: ansible-ansible.builtin.group Invoked with gid=42436 name=nova state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Dec  9 05:38:41 np0005551604 python3.9[183529]: ansible-ansible.builtin.user Invoked with comment=nova user group=nova groups=['libvirt'] name=nova shell=/bin/sh state=present uid=42436 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
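[Annotation] The getent/group/user triple above ensures the nova account exists with a pinned UID/GID and libvirt membership before any nova state is laid down. A shell sketch using the parameters from the log (the idempotence guards are mine):

    getent group nova  >/dev/null || groupadd --gid 42436 nova
    getent passwd nova >/dev/null || useradd --uid 42436 --gid nova --groups libvirt \
        --comment 'nova user' --shell /bin/sh --create-home nova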
Dec  9 05:38:42 np0005551604 systemd-logind[806]: New session 25 of user zuul.
Dec  9 05:38:42 np0005551604 systemd[1]: Started Session 25 of User zuul.
Dec  9 05:38:43 np0005551604 systemd[1]: session-25.scope: Deactivated successfully.
Dec  9 05:38:43 np0005551604 systemd-logind[806]: Session 25 logged out. Waiting for processes to exit.
Dec  9 05:38:43 np0005551604 systemd-logind[806]: Removed session 25.
Dec  9 05:38:44 np0005551604 python3.9[183715]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/config.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  9 05:38:44 np0005551604 python3.9[183836]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/config.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765276723.6994681-1229-270859269794157/.source.json follow=False _original_basename=config.json.j2 checksum=b51012bfb0ca26296dcf3793a2f284446fb1395e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  9 05:38:45 np0005551604 python3.9[183986]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova-blank.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  9 05:38:45 np0005551604 python3.9[184062]: ansible-ansible.legacy.file Invoked with mode=0644 setype=container_file_t dest=/var/lib/openstack/config/nova/nova-blank.conf _original_basename=nova-blank.conf recurse=False state=file path=/var/lib/openstack/config/nova/nova-blank.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  9 05:38:46 np0005551604 python3.9[184212]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/ssh-config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  9 05:38:46 np0005551604 python3.9[184333]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/ssh-config mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765276725.9046667-1229-192809878384502/.source follow=False _original_basename=ssh-config checksum=4297f735c41bdc1ff52d72e6f623a02242f37958 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  9 05:38:47 np0005551604 python3.9[184483]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/02-nova-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  9 05:38:48 np0005551604 python3.9[184604]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/02-nova-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765276727.1456003-1229-8382655651154/.source.conf follow=False _original_basename=02-nova-host-specific.conf.j2 checksum=1feba546d0beacad9258164ab79b8a747685ccc8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  9 05:38:49 np0005551604 python3.9[184754]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova_statedir_ownership.py follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  9 05:38:49 np0005551604 python3.9[184875]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/nova_statedir_ownership.py mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765276728.5009077-1229-37444260984569/.source.py follow=False _original_basename=nova_statedir_ownership.py checksum=c6c8a3cfefa5efd60ceb1408c4e977becedb71e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  9 05:38:50 np0005551604 python3.9[185025]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/run-on-host follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  9 05:38:50 np0005551604 python3.9[185146]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/run-on-host mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765276729.7333763-1229-162208212927259/.source follow=False _original_basename=run-on-host checksum=93aba8edc83d5878604a66d37fea2f12b60bdea2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
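
The stat/copy pairs above are Ansible's idempotent file deployment: each ansible.legacy.stat call fetches the sha1 checksum of the destination, and ansible.legacy.copy only transfers the rendered source when the checksums differ. A minimal sketch of that comparison (paths and helper names are illustrative; the real logic lives in Ansible's stat and copy modules):

    import hashlib

    def sha1_of(path):
        # Stream the file so large configs are not loaded into memory at once.
        digest = hashlib.sha1()
        with open(path, "rb") as handle:
            for chunk in iter(lambda: handle.read(65536), b""):
                digest.update(chunk)
        return digest.hexdigest()

    def needs_copy(rendered_src, dest):
        # Copy when the destination is missing or its checksum differs.
        try:
            return sha1_of(rendered_src) != sha1_of(dest)
        except FileNotFoundError:
            return True
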
Dec  9 05:38:51 np0005551604 python3.9[185298]: ansible-ansible.builtin.file Invoked with group=nova mode=0700 owner=nova path=/home/nova/.ssh state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:38:52 np0005551604 podman[185422]: 2025-12-09 10:38:52.142384751 +0000 UTC m=+0.070016259 container health_status 8f562587c42532f877bd4ac5090cf2d81dd9415b6201e22f74972e6d6b9e9403 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Dec  9 05:38:52 np0005551604 python3.9[185468]: ansible-ansible.legacy.copy Invoked with dest=/home/nova/.ssh/authorized_keys group=nova mode=0600 owner=nova remote_src=True src=/var/lib/openstack/config/nova/ssh-publickey backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:38:53 np0005551604 python3.9[185622]: ansible-ansible.builtin.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  9 05:38:53 np0005551604 python3.9[185774]: ansible-ansible.legacy.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  9 05:38:54 np0005551604 python3.9[185897]: ansible-ansible.legacy.copy Invoked with attributes=+i dest=/var/lib/nova/compute_id group=nova mode=0400 owner=nova src=/home/zuul/.ansible/tmp/ansible-tmp-1765276733.2309291-1336-239197011667705/.source _original_basename=.c_wcksdd follow=False checksum=bbff628a6ea4f12994b66e810982d63a67a943ce backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None
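
Note the attributes=+i and mode=0400 on /var/lib/nova/compute_id: the node's compute UUID is written once, made read-only, and flagged immutable so later runs (and the nova_compute_init container, which skips the file via NOVA_STATEDIR_OWNERSHIP_SKIP below) cannot alter it. A sketch of verifying the immutable bit, assuming e2fsprogs' lsattr is available on the host:

    import subprocess

    # lsattr prints an attribute field such as "----i---------e-------";
    # an "i" in that field means the inode is immutable (chattr +i).
    line = subprocess.run(
        ["lsattr", "/var/lib/nova/compute_id"],
        capture_output=True, text=True, check=True,
    ).stdout
    is_immutable = "i" in line.split()[0]
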
Dec  9 05:38:55 np0005551604 python3.9[186049]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  9 05:38:55 np0005551604 python3.9[186201]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  9 05:38:56 np0005551604 python3.9[186322]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765276735.299076-1362-126731258462830/.source.json follow=False _original_basename=nova_compute.json.j2 checksum=211ffd0bca4b407eb4de45a749ef70116a7806fd backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  9 05:38:57 np0005551604 python3.9[186472]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute_init.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  9 05:38:57 np0005551604 python3.9[186593]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute_init.json mode=0700 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765276736.7302845-1377-156590338410160/.source.json follow=False _original_basename=nova_compute_init.json.j2 checksum=60b024e6db49dc6e700fc0d50263944d98d4c034 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  9 05:38:58 np0005551604 python3.9[186745]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute_init.json debug=False
Dec  9 05:38:59 np0005551604 podman[186869]: 2025-12-09 10:38:59.304880681 +0000 UTC m=+0.090406565 container health_status e0a077177b2f078df1f170a6e5c0e8e08d4365b999ec0c487047ed6ab628f3d6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
Dec  9 05:38:59 np0005551604 python3.9[186920]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec  9 05:39:00 np0005551604 python3[187076]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute_init.json log_base_path=/var/log/containers/stdouts debug=False
Dec  9 05:39:00 np0005551604 podman[187111]: 2025-12-09 10:39:00.631150221 +0000 UTC m=+0.074252267 container create 73fb233dfeced3f1cc50bfd7cf4c5ad03632b74b05228ec222572e6ebdf5cf9e (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, container_name=nova_compute_init, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, tcib_managed=true, io.buildah.version=1.41.3)
Dec  9 05:39:00 np0005551604 podman[187111]: 2025-12-09 10:39:00.595286373 +0000 UTC m=+0.038388499 image pull e3166cc074f328e3b121ff82d56ed43a2542af699baffe6874520fe3837c2b18 quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Dec  9 05:39:00 np0005551604 python3[187076]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute_init --conmon-pidfile /run/nova_compute_init.pid --env NOVA_STATEDIR_OWNERSHIP_SKIP=/var/lib/nova/compute_id --env __OS_DEBUG=False --label config_id=edpm --label container_name=nova_compute_init --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']} --log-driver journald --log-level info --network none --privileged=False --security-opt label=disable --user root --volume /dev/log:/dev/log --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z --volume /var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init
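
The PODMAN-CONTAINER-DEBUG line shows how edpm_container_manage flattens the config_data dict into a podman create invocation: environment becomes --env, net becomes --network, security_opt becomes --security-opt, and each volumes entry becomes a --volume flag. A simplified sketch of that mapping, covering only the keys seen in this log and not the module's actual code:

    def podman_create_argv(name, cfg):
        argv = ["podman", "create", "--name", name]
        for key, value in cfg.get("environment", {}).items():
            argv += ["--env", "%s=%s" % (key, value)]
        if "net" in cfg:
            argv += ["--network", cfg["net"]]
        if "pid" in cfg:
            argv += ["--pid", cfg["pid"]]
        argv.append("--privileged=%s" % cfg.get("privileged", False))
        for opt in cfg.get("security_opt", []):
            argv += ["--security-opt", opt]
        if "user" in cfg:
            argv += ["--user", cfg["user"]]
        for volume in cfg.get("volumes", []):
            argv += ["--volume", volume]
        argv.append(cfg["image"])
        if cfg.get("command"):
            argv += cfg["command"].split()  # naive split; no shell quoting
        return argv

Applied to the nova_compute_init config_data above, this yields essentially the command in the debug line, minus the module-added flags such as --conmon-pidfile, --label, and the journald log driver.
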
Dec  9 05:39:01 np0005551604 python3.9[187301]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  9 05:39:02 np0005551604 python3.9[187455]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute.json debug=False
Dec  9 05:39:03 np0005551604 python3.9[187607]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec  9 05:39:04 np0005551604 python3[187759]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute.json log_base_path=/var/log/containers/stdouts debug=False
Dec  9 05:39:04 np0005551604 podman[187797]: 2025-12-09 10:39:04.614880566 +0000 UTC m=+0.077641860 container create 2cfb1116e5c21d944d07a9fbc165d93c5b9ded72611d7fd6ca79caaed003ea14 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, container_name=nova_compute, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  9 05:39:04 np0005551604 podman[187797]: 2025-12-09 10:39:04.577873817 +0000 UTC m=+0.040635201 image pull e3166cc074f328e3b121ff82d56ed43a2542af699baffe6874520fe3837c2b18 quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Dec  9 05:39:04 np0005551604 python3[187759]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute --conmon-pidfile /run/nova_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --label config_id=edpm --label container_name=nova_compute --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']} --log-driver journald --log-level info --network host --pid host --privileged=True --user nova --volume /var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro --volume /var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /etc/localtime:/etc/localtime:ro --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /var/lib/libvirt:/var/lib/libvirt --volume /run/libvirt:/run/libvirt:shared --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/iscsi:/var/lib/iscsi --volume /etc/multipath:/etc/multipath:z --volume /etc/multipath.conf:/etc/multipath.conf:ro --volume /etc/iscsi:/etc/iscsi:ro --volume /etc/nvme:/etc/nvme --volume /var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified kolla_start
Dec  9 05:39:05 np0005551604 python3.9[187989]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  9 05:39:06 np0005551604 python3.9[188143]: ansible-file Invoked with path=/etc/systemd/system/edpm_nova_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:39:07 np0005551604 python3.9[188294]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765276746.7124631-1469-271417735294907/source dest=/etc/systemd/system/edpm_nova_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:39:07 np0005551604 python3.9[188370]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec  9 05:39:07 np0005551604 systemd[1]: Reloading.
Dec  9 05:39:07 np0005551604 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  9 05:39:07 np0005551604 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  9 05:39:08 np0005551604 python3.9[188482]: ansible-systemd Invoked with state=restarted name=edpm_nova_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  9 05:39:08 np0005551604 systemd[1]: Reloading.
Dec  9 05:39:08 np0005551604 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  9 05:39:08 np0005551604 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
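
The two ansible-systemd invocations amount to a daemon-reload after the unit file copy, then enabling and restarting edpm_nova_compute.service; roughly the following, sketched with subprocess rather than the systemd module:

    import subprocess

    for cmd in (
        ["systemctl", "daemon-reload"],                         # daemon_reload=True
        ["systemctl", "enable", "edpm_nova_compute.service"],   # enabled=True
        ["systemctl", "restart", "edpm_nova_compute.service"],  # state=restarted
    ):
        subprocess.run(cmd, check=True)
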
Dec  9 05:39:09 np0005551604 systemd[1]: Starting nova_compute container...
Dec  9 05:39:09 np0005551604 podman[188520]: 2025-12-09 10:39:09.234482359 +0000 UTC m=+0.087462233 container health_status 0391d8911d61abd7376f1f93f329cadfe8d3add845c9e6f46fc2c3dfbcc4f02a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=multipathd, container_name=multipathd, org.label-schema.build-date=20251202, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec  9 05:39:09 np0005551604 systemd[1]: Started libcrun container.
Dec  9 05:39:09 np0005551604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b2411f6a5cc06d66ccbb360c079c44fb0b2187d660a7767fed63c82cd30921e/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Dec  9 05:39:09 np0005551604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b2411f6a5cc06d66ccbb360c079c44fb0b2187d660a7767fed63c82cd30921e/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Dec  9 05:39:09 np0005551604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b2411f6a5cc06d66ccbb360c079c44fb0b2187d660a7767fed63c82cd30921e/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Dec  9 05:39:09 np0005551604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b2411f6a5cc06d66ccbb360c079c44fb0b2187d660a7767fed63c82cd30921e/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Dec  9 05:39:09 np0005551604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b2411f6a5cc06d66ccbb360c079c44fb0b2187d660a7767fed63c82cd30921e/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Dec  9 05:39:09 np0005551604 podman[188522]: 2025-12-09 10:39:09.268281059 +0000 UTC m=+0.114906417 container init 2cfb1116e5c21d944d07a9fbc165d93c5b9ded72611d7fd6ca79caaed003ea14 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, container_name=nova_compute, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec  9 05:39:09 np0005551604 podman[188522]: 2025-12-09 10:39:09.273542586 +0000 UTC m=+0.120167924 container start 2cfb1116e5c21d944d07a9fbc165d93c5b9ded72611d7fd6ca79caaed003ea14 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.schema-version=1.0, config_id=edpm, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, maintainer=OpenStack Kubernetes Operator team, container_name=nova_compute)
Dec  9 05:39:09 np0005551604 nova_compute[188553]: + sudo -E kolla_set_configs
Dec  9 05:39:09 np0005551604 podman[188522]: nova_compute
Dec  9 05:39:09 np0005551604 systemd[1]: Started nova_compute container.
Dec  9 05:39:09 np0005551604 nova_compute[188553]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec  9 05:39:09 np0005551604 nova_compute[188553]: INFO:__main__:Validating config file
Dec  9 05:39:09 np0005551604 nova_compute[188553]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec  9 05:39:09 np0005551604 nova_compute[188553]: INFO:__main__:Copying service configuration files
Dec  9 05:39:09 np0005551604 nova_compute[188553]: INFO:__main__:Deleting /etc/nova/nova.conf
Dec  9 05:39:09 np0005551604 nova_compute[188553]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Dec  9 05:39:09 np0005551604 nova_compute[188553]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Dec  9 05:39:09 np0005551604 nova_compute[188553]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Dec  9 05:39:09 np0005551604 nova_compute[188553]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Dec  9 05:39:09 np0005551604 nova_compute[188553]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Dec  9 05:39:09 np0005551604 nova_compute[188553]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Dec  9 05:39:09 np0005551604 nova_compute[188553]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Dec  9 05:39:09 np0005551604 nova_compute[188553]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Dec  9 05:39:09 np0005551604 nova_compute[188553]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Dec  9 05:39:09 np0005551604 nova_compute[188553]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Dec  9 05:39:09 np0005551604 nova_compute[188553]: INFO:__main__:Deleting /etc/ceph
Dec  9 05:39:09 np0005551604 nova_compute[188553]: INFO:__main__:Creating directory /etc/ceph
Dec  9 05:39:09 np0005551604 nova_compute[188553]: INFO:__main__:Setting permission for /etc/ceph
Dec  9 05:39:09 np0005551604 nova_compute[188553]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Dec  9 05:39:09 np0005551604 nova_compute[188553]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Dec  9 05:39:09 np0005551604 nova_compute[188553]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Dec  9 05:39:09 np0005551604 nova_compute[188553]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Dec  9 05:39:09 np0005551604 nova_compute[188553]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Dec  9 05:39:09 np0005551604 nova_compute[188553]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Dec  9 05:39:09 np0005551604 nova_compute[188553]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Dec  9 05:39:09 np0005551604 nova_compute[188553]: INFO:__main__:Writing out command to execute
Dec  9 05:39:09 np0005551604 nova_compute[188553]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Dec  9 05:39:09 np0005551604 nova_compute[188553]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Dec  9 05:39:09 np0005551604 nova_compute[188553]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Dec  9 05:39:09 np0005551604 nova_compute[188553]: ++ cat /run_command
Dec  9 05:39:09 np0005551604 nova_compute[188553]: + CMD=nova-compute
Dec  9 05:39:09 np0005551604 nova_compute[188553]: + ARGS=
Dec  9 05:39:09 np0005551604 nova_compute[188553]: + sudo kolla_copy_cacerts
Dec  9 05:39:09 np0005551604 nova_compute[188553]: + [[ ! -n '' ]]
Dec  9 05:39:09 np0005551604 nova_compute[188553]: + . kolla_extend_start
Dec  9 05:39:09 np0005551604 nova_compute[188553]: Running command: 'nova-compute'
Dec  9 05:39:09 np0005551604 nova_compute[188553]: + echo 'Running command: '\''nova-compute'\'''
Dec  9 05:39:09 np0005551604 nova_compute[188553]: + umask 0022
Dec  9 05:39:09 np0005551604 nova_compute[188553]: + exec nova-compute
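
kolla_set_configs drives the copy sequence above from the config.json mounted at /var/lib/kolla/config_files/config.json; with KOLLA_CONFIG_STRATEGY=COPY_ALWAYS it re-copies every entry on each container start rather than only on first boot. Reconstructed from the log, the file looks roughly like this (source/dest pairs are taken from the copies above, while the owner/perm values are assumptions):

    KOLLA_CONFIG = {
        "command": "nova-compute",  # matches "cat /run_command" above
        "config_files": [
            {"source": "/var/lib/kolla/config_files/nova-blank.conf",
             "dest": "/etc/nova/nova.conf", "owner": "nova", "perm": "0600"},
            {"source": "/var/lib/kolla/config_files/01-nova.conf",
             "dest": "/etc/nova/nova.conf.d/01-nova.conf", "owner": "nova", "perm": "0600"},
            {"source": "/var/lib/kolla/config_files/ssh-config",
             "dest": "/var/lib/nova/.ssh/config", "owner": "nova", "perm": "0600"},
            {"source": "/var/lib/kolla/config_files/run-on-host",
             "dest": "/usr/sbin/iscsiadm", "owner": "nova", "perm": "0755"},
            # ...remaining entries follow the same source/dest pattern
        ],
    }
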
Dec  9 05:39:10 np0005551604 python3.9[188716]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner_healthcheck.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  9 05:39:11 np0005551604 python3.9[188866]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  9 05:39:11 np0005551604 nova_compute[188553]: 2025-12-09 10:39:11.358 188558 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Dec  9 05:39:11 np0005551604 nova_compute[188553]: 2025-12-09 10:39:11.358 188558 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Dec  9 05:39:11 np0005551604 nova_compute[188553]: 2025-12-09 10:39:11.358 188558 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Dec  9 05:39:11 np0005551604 nova_compute[188553]: 2025-12-09 10:39:11.358 188558 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs
Dec  9 05:39:11 np0005551604 nova_compute[188553]: 2025-12-09 10:39:11.494 188558 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  9 05:39:11 np0005551604 nova_compute[188553]: 2025-12-09 10:39:11.526 188558 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.032s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  9 05:39:11 np0005551604 nova_compute[188553]: 2025-12-09 10:39:11.526 188558 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473
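
The grep probe above is the iSCSI capability check: the binary is searched for the node.session.scan option to decide whether manual scans are supported. Since /usr/sbin/iscsiadm was just replaced with the run-on-host wrapper, grep exits 1 and the capability is recorded as absent; the "failed. Not Retrying." line is the expected negative result, not an error. The same probe, sketched with the oslo.concurrency helper that produced these log lines:

    from oslo_concurrency import processutils

    stdout, _stderr = processutils.execute(
        "grep", "-F", "node.session.scan", "/sbin/iscsiadm",
        check_exit_code=False,  # exit code 1 only means "string not found"
    )
    supports_manual_scan = bool(stdout)
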
Dec  9 05:39:11 np0005551604 python3.9[189020]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service.requires follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.182 188558 INFO nova.virt.driver [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.317 188558 INFO nova.compute.provider_config [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.337 188558 DEBUG oslo_concurrency.lockutils [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.337 188558 DEBUG oslo_concurrency.lockutils [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.338 188558 DEBUG oslo_concurrency.lockutils [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.338 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.338 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.338 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.338 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.338 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.339 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.339 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.339 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.339 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.339 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.339 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.340 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.340 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.340 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.340 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.340 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.340 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.340 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.340 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.341 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.341 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.341 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.341 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.341 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.341 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.342 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.342 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.342 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.342 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.342 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.342 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.342 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.343 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.343 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.343 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.343 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.343 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.343 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.343 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.344 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.344 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.344 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.344 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.344 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.344 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.345 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.345 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.345 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.345 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.345 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.345 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.345 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.346 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.346 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.346 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.346 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.346 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.346 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.346 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.346 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.347 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.347 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.347 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.347 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.347 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.347 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.347 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.348 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.348 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.348 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.348 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.348 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.348 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.348 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.349 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.349 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.349 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.349 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.349 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.349 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.349 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.350 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.350 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.350 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.350 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.350 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.350 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.350 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.351 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.351 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.351 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.351 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.351 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.352 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.352 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.352 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.352 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.352 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.352 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.353 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.353 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.353 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.353 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.353 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.353 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.353 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.354 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.354 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.354 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.354 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.354 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.354 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.354 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.354 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.355 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.355 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.355 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.355 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.355 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.355 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.355 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.356 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.356 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.356 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.356 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.356 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.356 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.356 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.357 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.357 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.357 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.357 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.357 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.357 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.357 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.358 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.358 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.358 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.358 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.358 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.358 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.358 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.359 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.359 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.359 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.359 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.359 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.359 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
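
The block above is the tail of the [DEFAULT] option dump that oslo.service emits through ConfigOpts.log_opt_values() when a service starts with debug logging enabled; the cfg.py:2602 suffix marks ungrouped [DEFAULT] options, while the cfg.py:2609 lines that follow are options in named groups. Values registered as secret (transport_url here, database.connection further down) are masked as ****. A minimal sketch of the same mechanism, assuming made-up option names since the real nova registers hundreds:

    import logging

    from oslo_config import cfg

    CONF = cfg.CONF
    CONF.register_opts([cfg.IntOpt('example_opt', default=10)])
    CONF.register_opts([cfg.StrOpt('example_secret', secret=True,
                                   default='hunter2')],
                       group='example_group')

    logging.basicConfig(level=logging.DEBUG)
    # One line per registered option; secret=True values are rendered
    # as "****", exactly like transport_url above.
    CONF.log_opt_values(logging.getLogger(__name__), logging.DEBUG)
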
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.359 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.360 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
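
oslo_concurrency.lock_path = /var/lib/nova/tmp is where oslo.concurrency places its external (inter-process) file locks, and disable_process_locking = False keeps that locking active. A minimal sketch of taking such a lock, with a hypothetical lock name:

    from oslo_concurrency import lockutils

    @lockutils.synchronized('demo-lock', external=True,
                            lock_path='/var/lib/nova/tmp')
    def critical_section():
        # external=True serializes across processes on this host; the
        # lock file is created under lock_path.
        pass

    critical_section()
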
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.360 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.360 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.360 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.360 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.360 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.360 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.361 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.361 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.361 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.361 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.361 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.361 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.361 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.362 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.362 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.362 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.362 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.362 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.362 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.362 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.363 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.363 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.363 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.363 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.363 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.363 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.364 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.364 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
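
In the [api] group above, vendordata_providers = ['StaticJSON'] with vendordata_jsonfile_path = None means the static vendordata provider is enabled but has no file to serve, so guests get an empty document; the metadata service itself listens on metadata_listen_port 8775 (see the [DEFAULT] dump). A sketch of how a guest would read it, assuming the conventional link-local metadata address that the network layer proxies to this service:

    import json
    import urllib.request

    URL = 'http://169.254.169.254/openstack/latest/vendor_data.json'
    with urllib.request.urlopen(URL, timeout=5) as resp:
        # With StaticJSON and no vendordata_jsonfile_path this is "{}".
        print(json.load(resp))
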
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.364 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.364 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.364 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.364 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.364 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.365 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.365 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.365 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.365 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.365 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.365 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.365 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.366 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.366 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.366 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.366 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.366 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.366 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.366 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.367 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.367 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.367 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.367 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.367 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.367 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.367 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.368 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.368 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.368 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.368 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.368 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.368 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
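
The [cache] group pairs enabled = True with backend = oslo_cache.dict, an in-process dictionary backend, so the memcache_* values shown are inert defaults (nothing here talks to localhost:11211). A sketch of wiring a cache region the way oslo.cache expects, overriding the two options programmatically since the sketch parses no config files:

    from oslo_cache import core as cache
    from oslo_config import cfg

    CONF = cfg.CONF
    cache.configure(CONF)                 # registers the [cache] options
    CONF.set_override('enabled', True, group='cache')
    CONF.set_override('backend', 'oslo_cache.dict', group='cache')

    region = cache.create_region()
    cache.configure_cache_region(CONF, region)

    region.set('greeting', 'hello')
    assert region.get('greeting') == 'hello'
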
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.368 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.369 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.369 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.369 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.369 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.369 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.369 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.369 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.370 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.370 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.370 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.370 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.370 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.370 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.370 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
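
In the [cinder] group, catalog_info = volumev3:cinderv3:internalURL is the service_type:service_name:interface triple used to pick the Cinder endpoint out of the Keystone catalog, and auth_type = password with auth_section = None means the credentials are read from the [cinder] section itself. A sketch of the keystoneauth1 plumbing that consumes such options, assuming CONF already holds the deployment's config as it does in the running service:

    from keystoneauth1 import loading as ks_loading
    from oslo_config import cfg

    CONF = cfg.CONF
    ks_loading.register_auth_conf_options(CONF, 'cinder')
    ks_loading.register_session_conf_options(CONF, 'cinder')

    auth = ks_loading.load_auth_from_conf_options(CONF, 'cinder')
    sess = ks_loading.load_session_from_conf_options(CONF, 'cinder',
                                                     auth=auth)
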
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.371 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.371 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.371 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.371 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.371 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.371 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.371 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.372 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.372 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.372 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.372 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.372 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
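
compute.cpu_dedicated_set and compute.cpu_shared_set are both None, so this host defines no pinned/unpinned CPU partition. When set, they take a range list such as "0-3,6,^2" (ranges, single CPUs, ^-prefixed exclusions). A hypothetical stand-alone parser for that syntax, purely for illustration (nova's own parser lives in nova.virt.hardware):

    def parse_cpu_set(spec: str) -> set:
        """Parse "0-3,6,^2" into {0, 1, 3, 6}."""
        cpus, excludes = set(), set()
        for part in (p.strip() for p in spec.split(',')):
            target = cpus
            if part.startswith('^'):          # "^N" excludes a CPU
                target, part = excludes, part[1:]
            if '-' in part:
                lo, hi = (int(x) for x in part.split('-'))
                target.update(range(lo, hi + 1))
            else:
                target.add(int(part))
        return cpus - excludes

    assert parse_cpu_set('0-3,6,^2') == {0, 1, 3, 6}
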
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.372 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.372 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.373 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.373 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.373 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.373 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.373 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.373 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.373 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.374 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.374 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.374 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.374 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.374 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.374 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.374 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.375 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.375 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.375 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.375 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.375 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.375 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.375 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.376 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
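
The [cyborg] block is a standard keystoneauth adapter section: service_type = accelerator names the catalog entry, and valid_interfaces = ['internal', 'public'] sets the endpoint preference order. A sketch of loading it as an adapter, with an unauthenticated placeholder session since real credentials are out of scope here:

    from keystoneauth1 import loading as ks_loading
    from keystoneauth1 import session as ks_session
    from oslo_config import cfg

    CONF = cfg.CONF
    ks_loading.register_adapter_conf_options(CONF, 'cyborg')
    sess = ks_session.Session()   # placeholder; the service passes a real one
    adapter = ks_loading.load_adapter_from_conf_options(CONF, 'cyborg',
                                                        session=sess)
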
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.376 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.376 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.376 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.376 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.376 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.376 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.377 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.377 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.377 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.377 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.377 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.377 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.377 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.378 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.378 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.378 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.378 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.378 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.378 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.378 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.378 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.379 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.379 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.379 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.379 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.379 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.379 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.379 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.380 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.380 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.380 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.380 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.380 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.380 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.380 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.381 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.381 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.381 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.381 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.381 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.382 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.382 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.382 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.382 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.382 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.382 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.382 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.383 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.383 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.383 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.383 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.383 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.383 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.383 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.384 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.384 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.384 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.384 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.384 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.384 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.385 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.385 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.385 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.385 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.385 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.385 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.386 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.386 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.386 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.386 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.386 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.386 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.386 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.387 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.387 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.387 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.387 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.387 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.387 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.387 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.388 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.388 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.388 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.388 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.388 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.388 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.388 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.389 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.389 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.389 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.389 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.389 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.389 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.389 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.390 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.390 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.390 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.390 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.390 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.391 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.391 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.391 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.391 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.391 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.391 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.391 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.392 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.392 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.392 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.392 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.392 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.392 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.392 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.393 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.393 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.393 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.393 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.393 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.393 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.393 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.394 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.394 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.394 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.394 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.394 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.394 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.395 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.395 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.395 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.395 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.395 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.395 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.395 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.396 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.396 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.396 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.396 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.396 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.396 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.397 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.397 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.397 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.397 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.397 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.397 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.397 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.398 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.398 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.398 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.398 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.398 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.398 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.398 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.399 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.399 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.399 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.399 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.399 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.399 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.400 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.400 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.400 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.400 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.400 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.400 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.400 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.401 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.401 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.401 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.401 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.401 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.401 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.401 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.402 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.402 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.402 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.402 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.402 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.402 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.402 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.403 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.403 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.403 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.403 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.403 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.403 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.403 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.404 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.404 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.404 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.404 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.404 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.404 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.405 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.405 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.405 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.405 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.405 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.405 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.405 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.405 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.406 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.406 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.406 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.406 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.406 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.406 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.407 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] libvirt.images_rbd_ceph_conf   =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.407 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.407 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.407 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] libvirt.images_rbd_glance_store_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.407 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] libvirt.images_rbd_pool        = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.407 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] libvirt.images_type            = qcow2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.407 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.408 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.408 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.408 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.408 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.408 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.408 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.408 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.409 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.409 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.409 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.409 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.409 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.409 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.409 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.410 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.410 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.410 188558 WARNING oslo_config.cfg [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Dec  9 05:39:12 np0005551604 nova_compute[188553]: live_migration_uri is deprecated for removal in favor of two other options that
Dec  9 05:39:12 np0005551604 nova_compute[188553]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Dec  9 05:39:12 np0005551604 nova_compute[188553]: and ``live_migration_inbound_addr`` respectively.
Dec  9 05:39:12 np0005551604 nova_compute[188553]: ).  Its value may be silently ignored in the future.#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.410 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
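Note: the warning above flags [libvirt]/live_migration_uri as deprecated in favor of live_migration_scheme and live_migration_inbound_addr. A minimal nova.conf sketch of the replacement, assuming the intent of the logged qemu+tls://%s/system URI is to be kept; the inbound address is a placeholder, not a value taken from this log:

    [libvirt]
    # 'tls' reproduces the qemu+tls:// transport of the deprecated URI
    live_migration_scheme = tls
    # placeholder, deployment-specific; not recorded in this log
    #live_migration_inbound_addr = <migration-network address>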
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.410 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.410 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.411 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.411 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.411 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.411 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.411 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.411 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.411 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.412 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.412 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.412 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.412 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.412 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.412 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.413 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.413 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.413 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] libvirt.rbd_secret_uuid        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.413 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] libvirt.rbd_user               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.413 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.413 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.413 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.414 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.414 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.414 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.414 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.414 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.414 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.414 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.415 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.415 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.415 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.415 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.415 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.415 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.415 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.416 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.416 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.416 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.416 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.416 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.416 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.416 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.417 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.417 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.417 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.417 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.417 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.417 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.417 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.418 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
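Note: transcribed into nova.conf form, the [libvirt] values logged above amount to the following sketch (only values present in the log are shown; whether any of them differ from package defaults is not recorded here):

    [libvirt]
    virt_type = kvm
    images_type = qcow2
    hw_machine_type = x86_64=q35
    live_migration_uri = qemu+tls://%s/system   # deprecated, see warning above
    live_migration_with_native_tls = true
    live_migration_permit_auto_converge = true
    live_migration_permit_post_copy = true
    live_migration_completion_timeout = 800
    live_migration_downtime = 500
    volume_use_multipath = true
    swtpm_enabled = true
    rx_queue_size = 512
    tx_queue_size = 512
    wait_soft_reboot_seconds = 120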
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.418 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.418 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.418 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.418 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.418 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.418 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.419 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.419 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.419 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.419 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.419 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.419 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.419 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.420 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.420 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.420 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.420 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.420 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.420 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.420 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.421 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.421 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.421 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.421 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.421 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.421 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.421 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.422 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
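Note: the [neutron] client options logged above, rendered as a nova.conf sketch (the shared secret is masked as **** in the log and left as a placeholder here):

    [neutron]
    auth_type = password
    region_name = regionOne
    ovs_bridge = br-int
    default_floating_pool = nova
    service_metadata_proxy = true
    metadata_proxy_shared_secret = <redacted>   # logged as ****
    valid_interfaces = internal
    http_retries = 3
    extension_sync_interval = 600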
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.422 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.422 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.422 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.422 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.422 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.423 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.423 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.423 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
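Note: the notifications and pci groups carry few set values; as nova.conf stanzas they would read roughly as below (a sketch from the logged values only; the empty pci lists mean no PCI aliases or device specs are configured on this host):

    [notifications]
    notification_format = unversioned
    default_level = INFO
    versioned_notifications_topics = versioned_notifications

    [pci]
    # alias = [] and device_spec = [] in the log: no PCI passthrough configured
    report_in_placement = false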
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.423 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.423 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.423 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.423 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.424 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.424 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.424 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.424 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.424 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.424 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.424 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.425 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.425 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.425 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.425 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.425 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.425 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.425 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.426 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.426 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.426 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.426 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.426 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.426 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.426 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.427 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.427 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.427 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.427 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.427 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.427 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.427 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.428 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.428 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.428 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.428 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.428 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
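Note: the [placement] group shows a standard Keystone password-auth client. As a nova.conf sketch (the password is masked in the log and left as a placeholder):

    [placement]
    auth_type = password
    auth_url = https://keystone-internal.openstack.svc:5000
    project_name = service
    project_domain_name = Default
    username = nova
    user_domain_name = Default
    password = <redacted>   # logged as ****
    region_name = regionOne
    valid_interfaces = internal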
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.428 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.428 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.429 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.429 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.429 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.429 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.429 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.429 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.429 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.430 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.430 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.430 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.430 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
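Note: the quota limits logged above as a nova.conf sketch; count_usage_from_placement = False means usage is still counted from the cell databases via the legacy DbQuotaDriver:

    [quota]
    driver = nova.quota.DbQuotaDriver
    instances = 10
    cores = 20
    ram = 51200
    key_pairs = 100
    metadata_items = 128
    injected_files = 5
    injected_file_content_bytes = 10240
    injected_file_path_length = 255
    server_groups = 10
    server_group_members = 10
    recheck_quota = true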
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.430 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.430 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.430 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.431 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.431 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.431 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.431 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.431 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.431 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.432 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.432 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.432 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.432 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
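Note: the [scheduler] values logged above look close to stock; as a sketch, the ones most often tuned are:

    [scheduler]
    max_attempts = 3
    discover_hosts_in_cells_interval = -1   # periodic host discovery disabled
    query_placement_for_availability_zone = true
    max_placement_results = 1000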
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.432 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.432 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.433 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.433 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.433 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.433 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.433 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.433 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.433 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.434 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.434 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.434 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.434 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.434 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.434 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.434 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.434 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.435 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.435 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.435 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.435 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.435 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.435 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.435 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
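Taken together, the filter_scheduler values dumped above map onto a nova.conf section like the sketch below. This is a reconstruction from the logged values only; the dump does not distinguish explicit overrides from compiled-in defaults, and only a few representative options are shown:

    [filter_scheduler]
    enabled_filters = ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,ServerGroupAntiAffinityFilter,ServerGroupAffinityFilter
    host_subset_size = 1
    max_instances_per_host = 50
    max_io_ops_per_host = 8
    ram_weight_multiplier = 1.0
    track_instance_changes = True

These options are consumed by nova-scheduler rather than by this compute service; nova-compute logs them anyway because it registers the same option groups when it loads the shared configuration tree.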
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.436 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.436 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.436 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.436 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
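The metrics weigher options above show required = True but an empty weight_setting, so the metrics weigher effectively contributes nothing to host weights unless metric weights are actually configured. As a nova.conf sketch of the logged values:

    [metrics]
    required = True
    weight_multiplier = 1.0
    weight_of_unavailable = -10000.0
    weight_setting =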
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.436 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.436 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.437 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.437 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.437 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.437 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
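serial_console.enabled = False, so the remaining serial console options (base_url, port_range, proxy addresses) are inert on this node. For reference, the equivalent nova.conf stanza reconstructed from the dump would be:

    [serial_console]
    enabled = False
    base_url = ws://127.0.0.1:6083/
    port_range = 10000:20000
    proxyclient_address = 127.0.0.1
    serialproxy_host = 0.0.0.0
    serialproxy_port = 6083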
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.437 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.437 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.437 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.438 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.438 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.438 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.438 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.438 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.438 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.438 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
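The service_user block shows send_service_user_token = True with auth_type = password, meaning this service attaches a service token to the user requests it forwards to other OpenStack services. A minimal nova.conf sketch of what the dump shows (the actual credential options are registered under the same group but do not appear in this excerpt):

    [service_user]
    send_service_user_token = True
    auth_type = password
    # auth_url / username / password live in the same section but are
    # masked or not printed in this option dump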
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.439 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.439 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.439 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.439 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.439 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.439 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.440 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.440 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.440 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.440 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.440 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.440 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
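spice.enabled = False here (while vnc.enabled = True further down), so the SPICE values above are all inactive defaults. Collected as nova.conf for reference:

    [spice]
    enabled = False
    agent_enabled = True
    html5proxy_base_url = http://127.0.0.1:6082/spice_auto.html
    html5proxy_host = 0.0.0.0
    html5proxy_port = 6082
    server_listen = 127.0.0.1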
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.440 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.441 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.441 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.441 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.441 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.441 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.441 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.441 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.442 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.442 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.442 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.442 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.442 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.442 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.442 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.443 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.443 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.443 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.443 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.443 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.443 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.443 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.444 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.444 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.444 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.444 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.444 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.444 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.444 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.444 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.445 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.445 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.445 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.445 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.445 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.445 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.445 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.446 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.446 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
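Every vmware.* option above is at its unset default (host_ip and host_username are None, the password is masked), consistent with the VMware driver not being configured on this node. For orientation only, a minimal section for that driver would look like the following; the bracketed values are placeholders, not values from this log:

    [vmware]
    host_ip = <vcenter-address>    # placeholder, unset in this dump
    host_username = <user>         # placeholder, unset in this dump
    host_password = <secret>       # masked (****) in this dump
    cluster_name = <cluster>       # placeholder, unset in this dump
    api_retry_count = 10
    task_poll_interval = 0.5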
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.446 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.446 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.446 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.446 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.447 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.447 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.447 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.447 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.447 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.447 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
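The vnc group is the active console configuration on this node: the noVNC client is served through the public route, QEMU listens on all addresses (server_listen = ::0), and the proxy reaches this hypervisor at 192.168.122.100. In nova.conf form:

    [vnc]
    enabled = True
    auth_schemes = none
    novncproxy_base_url = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html
    server_listen = ::0
    server_proxyclient_address = 192.168.122.100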
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.447 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.448 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.448 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.448 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.448 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.448 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.448 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.448 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.449 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.449 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.449 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.449 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.449 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.449 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.449 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.450 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.450 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.450 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.450 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.450 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.450 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
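Most workarounds.* flags above are off; the ones enabled on this node are the QEMU monitor self-announce after live migration (3 attempts at 1 s intervals), lifecycle event handling, image-cache disk reservation, and skipping the CPU comparison on the migration destination. Gathered as nova.conf, listing only the flags logged as True or non-zero:

    [workarounds]
    enable_qemu_monitor_announce_self = True
    qemu_monitor_announce_self_count = 3
    qemu_monitor_announce_self_interval = 1
    handle_virt_lifecycle_events = True
    reserve_disk_resource_for_image_cache = True
    skip_cpu_compare_on_dest = True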
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.450 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.451 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.451 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.451 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.451 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.451 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.451 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.451 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.451 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.452 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.452 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
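The wsgi group mostly affects the API services rather than nova-compute, but the dumped values translate to:

    [wsgi]
    api_paste_config = api-paste.ini
    client_socket_timeout = 900
    default_pool_size = 1000
    keep_alive = True
    max_header_line = 16384
    tcp_keepidle = 600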
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.452 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.452 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.452 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.452 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.453 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.453 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.453 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.453 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.453 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.453 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.453 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.454 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.454 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.454 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
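oslo_policy.enforce_new_defaults = True together with enforce_scope = True means this deployment runs with secure RBAC: the new policy defaults apply and token scope is checked, with operator overrides read from policy.yaml and the policy.d directory. Equivalent nova.conf:

    [oslo_policy]
    enforce_new_defaults = True
    enforce_scope = True
    policy_file = policy.yaml
    policy_dirs = policy.d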
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.454 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.454 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.454 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.454 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.455 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.455 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.455 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.455 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.455 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.455 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.455 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.456 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.456 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.456 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.456 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.456 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.456 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.456 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.457 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.457 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.457 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.457 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.457 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.457 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.458 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.458 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.458 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.458 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.458 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.458 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.458 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.459 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.459 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.459 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.459 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
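The RabbitMQ transport above is set up for quorum queues with durable AMQP queues (rabbit_quorum_queue = True, amqp_durable_queues = True) rather than classic mirrored queues (rabbit_ha_queues = False), over plain AMQP without TLS (ssl = False, empty ssl_* paths). The notable values as nova.conf:

    [oslo_messaging_rabbit]
    rabbit_quorum_queue = True
    amqp_durable_queues = True
    rabbit_ha_queues = False
    heartbeat_timeout_threshold = 60
    heartbeat_rate = 2
    ssl = False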
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.459 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.459 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.459 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.460 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
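Notifications are effectively disabled: the noop driver discards every notification even though a transport_url is configured (masked in the dump). In nova.conf:

    [oslo_messaging_notifications]
    driver = noop
    topics = notifications
    # transport_url is configured but masked (****) in this dump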
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.460 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.460 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.460 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.460 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.460 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.461 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.461 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.461 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.461 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.461 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.461 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.461 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.461 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.462 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.462 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.462 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.462 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.462 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.462 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.462 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.463 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.463 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.463 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.463 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.463 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.463 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.463 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.464 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.464 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.464 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.464 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.464 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.464 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.464 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.465 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.465 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.465 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.465 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.465 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.465 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.465 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.466 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.466 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.466 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.466 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.466 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.466 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.466 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.466 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.467 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.467 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.467 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.467 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.467 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.467 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.467 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.468 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.468 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.468 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.468 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.468 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.468 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.468 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.469 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.469 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.469 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.469 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.469 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.469 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.469 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.470 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.470 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.470 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.470 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.470 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.470 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.470 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.471 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.471 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.471 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.471 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.471 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.471 188558 DEBUG oslo_service.service [None req-d4caec34-e0ff-4873-ab95-33afe6c4351b - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
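The block ending above is oslo.config's standard startup dump: oslo_service calls ConfigOpts.log_opt_values(), which emits one "group.option = value" line per registered option and masks options registered with secret=True (hence the **** for oslo_limit.password). A minimal sketch of the same mechanism, using illustrative options rather than nova's real registration:

    import logging

    from oslo_config import cfg

    CONF = cfg.CONF
    # Illustrative options only; nova registers its real option set elsewhere.
    CONF.register_opts(
        [cfg.StrOpt('username', default='nova'),
         cfg.StrOpt('password', secret=True),
         cfg.BoolOpt('insecure', default=False)],
        group='oslo_limit')

    logging.basicConfig(level=logging.DEBUG)
    LOG = logging.getLogger(__name__)

    CONF([])  # parse an (empty) command line so the config object is usable
    # Prints one DEBUG line per option; secret values are shown as ****.
    CONF.log_opt_values(LOG, logging.DEBUG)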
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.472 188558 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.488 188558 DEBUG nova.virt.libvirt.host [None req-73704fe7-ec8a-423b-91ec-a3a32cc7fc3d - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.488 188558 DEBUG nova.virt.libvirt.host [None req-73704fe7-ec8a-423b-91ec-a3a32cc7fc3d - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.489 188558 DEBUG nova.virt.libvirt.host [None req-73704fe7-ec8a-423b-91ec-a3a32cc7fc3d - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.489 188558 DEBUG nova.virt.libvirt.host [None req-73704fe7-ec8a-423b-91ec-a3a32cc7fc3d - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503#033[00m
Dec  9 05:39:12 np0005551604 systemd[1]: Starting libvirt QEMU daemon...
Dec  9 05:39:12 np0005551604 systemd[1]: Started libvirt QEMU daemon.
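The QEMU driver daemon appears to be socket-activated here: systemd starts it just as nova opens its first connection. A quick check from Python that the daemon came up, assuming systemctl is on PATH (the unit name is an assumption; RHEL 9 modular libvirt typically uses virtqemud.service):

    import subprocess

    # 'is-active' exits 0 and prints "active" when the unit is running.
    state = subprocess.run(
        ['systemctl', 'is-active', 'virtqemud.service'],
        capture_output=True, text=True)
    print(state.stdout.strip())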
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.594 188558 DEBUG nova.virt.libvirt.host [None req-73704fe7-ec8a-423b-91ec-a3a32cc7fc3d - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7f0f91a287f0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.599 188558 DEBUG nova.virt.libvirt.host [None req-73704fe7-ec8a-423b-91ec-a3a32cc7fc3d - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7f0f91a287f0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.601 188558 INFO nova.virt.libvirt.driver [None req-73704fe7-ec8a-423b-91ec-a3a32cc7fc3d - - - - - -] Connection event '1' reason 'None'#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.623 188558 WARNING nova.virt.libvirt.driver [None req-73704fe7-ec8a-423b-91ec-a3a32cc7fc3d - - - - - -] Cannot update service status on host "compute-0.ctlplane.example.com" since it is not registered.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.#033[00m
Dec  9 05:39:12 np0005551604 nova_compute[188553]: 2025-12-09 10:39:12.624 188558 DEBUG nova.virt.libvirt.volume.mount [None req-73704fe7-ec8a-423b-91ec-a3a32cc7fc3d - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130#033[00m
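Nova's Host object established the qemu:///system connection above through the libvirt-python bindings; the ComputeHostNotFound warning is typically the benign first-start case, logged before the compute service record has been created. A rough standalone equivalent of the connection step, assuming libvirt-python is installed and the daemon is running:

    import libvirt

    # Read-only is enough to fetch capabilities; nova itself opens a
    # read-write connection and registers lifecycle/connection callbacks.
    conn = libvirt.openReadOnly('qemu:///system')
    try:
        # The same <capabilities> XML document that nova logs at INFO below.
        caps_xml = conn.getCapabilities()
        print(caps_xml[:200])
    finally:
        conn.close()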
Dec  9 05:39:13 np0005551604 python3.9[189224]: ansible-containers.podman.podman_container Invoked with name=nova_nvme_cleaner state=absent executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
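The Ansible task logged above invokes containers.podman.podman_container with state=absent and force_delete=True, i.e. a forced, idempotent removal of the nova_nvme_cleaner container. A hedged equivalent that shells out to the podman CLI directly, assuming podman is on PATH:

    import subprocess

    def remove_container(name: str) -> None:
        # --force stops a running container first; --ignore makes the call
        # idempotent when the container does not exist (like state=absent).
        result = subprocess.run(
            ['podman', 'rm', '--force', '--ignore', name],
            capture_output=True, text=True)
        if result.returncode != 0:
            raise RuntimeError(result.stderr.strip())

    remove_container('nova_nvme_cleaner')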
Dec  9 05:39:13 np0005551604 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec  9 05:39:13 np0005551604 nova_compute[188553]: 2025-12-09 10:39:13.573 188558 INFO nova.virt.libvirt.host [None req-73704fe7-ec8a-423b-91ec-a3a32cc7fc3d - - - - - -] Libvirt host capabilities <capabilities>
Dec  9 05:39:13 np0005551604 nova_compute[188553]: 
Dec  9 05:39:13 np0005551604 nova_compute[188553]:  <host>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:    <uuid>6aaf5123-0bdb-461d-92bb-b40c4bea282b</uuid>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:    <cpu>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <arch>x86_64</arch>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model>EPYC-Rome-v4</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <vendor>AMD</vendor>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <microcode version='16777317'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <signature family='23' model='49' stepping='0'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <maxphysaddr mode='emulate' bits='40'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <feature name='x2apic'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <feature name='tsc-deadline'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <feature name='osxsave'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <feature name='hypervisor'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <feature name='tsc_adjust'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <feature name='spec-ctrl'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <feature name='stibp'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <feature name='arch-capabilities'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <feature name='ssbd'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <feature name='cmp_legacy'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <feature name='topoext'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <feature name='virt-ssbd'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <feature name='lbrv'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <feature name='tsc-scale'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <feature name='vmcb-clean'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <feature name='pause-filter'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <feature name='pfthreshold'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <feature name='svme-addr-chk'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <feature name='rdctl-no'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <feature name='skip-l1dfl-vmentry'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <feature name='mds-no'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <feature name='pschange-mc-no'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <pages unit='KiB' size='4'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <pages unit='KiB' size='2048'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <pages unit='KiB' size='1048576'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:    </cpu>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:    <power_management>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <suspend_mem/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <suspend_disk/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <suspend_hybrid/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:    </power_management>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:    <iommu support='no'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:    <migration_features>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <live/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <uri_transports>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <uri_transport>tcp</uri_transport>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <uri_transport>rdma</uri_transport>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </uri_transports>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:    </migration_features>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:    <topology>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <cells num='1'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <cell id='0'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:          <memory unit='KiB'>7864304</memory>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:          <pages unit='KiB' size='4'>1966076</pages>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:          <pages unit='KiB' size='2048'>0</pages>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:          <pages unit='KiB' size='1048576'>0</pages>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:          <distances>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:            <sibling id='0' value='10'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:          </distances>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:          <cpus num='8'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:            <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:            <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:            <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:            <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:            <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:            <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:            <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:            <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:          </cpus>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        </cell>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </cells>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:    </topology>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:    <cache>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:    </cache>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:    <secmodel>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model>selinux</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <doi>0</doi>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:    </secmodel>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:    <secmodel>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model>dac</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <doi>0</doi>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <baselabel type='kvm'>+107:+107</baselabel>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <baselabel type='qemu'>+107:+107</baselabel>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:    </secmodel>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:  </host>
Dec  9 05:39:13 np0005551604 nova_compute[188553]: 
Dec  9 05:39:13 np0005551604 nova_compute[188553]:  <guest>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:    <os_type>hvm</os_type>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:    <arch name='i686'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <wordsize>32</wordsize>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <domain type='qemu'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <domain type='kvm'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:    </arch>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:    <features>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <pae/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <nonpae/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <acpi default='on' toggle='yes'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <apic default='on' toggle='no'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <cpuselection/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <deviceboot/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <disksnapshot default='on' toggle='no'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <externalSnapshot/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:    </features>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:  </guest>
Dec  9 05:39:13 np0005551604 nova_compute[188553]: 
Dec  9 05:39:13 np0005551604 nova_compute[188553]:  <guest>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:    <os_type>hvm</os_type>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:    <arch name='x86_64'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <wordsize>64</wordsize>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <domain type='qemu'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <domain type='kvm'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:    </arch>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:    <features>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <acpi default='on' toggle='yes'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <apic default='on' toggle='no'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <cpuselection/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <deviceboot/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <disksnapshot default='on' toggle='no'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <externalSnapshot/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:    </features>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:  </guest>
Dec  9 05:39:13 np0005551604 nova_compute[188553]: 
Dec  9 05:39:13 np0005551604 nova_compute[188553]: </capabilities>
Dec  9 05:39:13 np0005551604 nova_compute[188553]: #033[00m
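The capabilities document nova just logged is plain XML, so the fields the driver cares about (host CPU model, NUMA cells, supported page sizes) can be pulled out with the standard library. A minimal sketch, reusing caps_xml from the connection example above:

    import xml.etree.ElementTree as ET

    root = ET.fromstring(caps_xml)

    # Host architecture and CPU model, e.g. x86_64 / EPYC-Rome-v4.
    cpu = root.find('./host/cpu')
    print(cpu.findtext('arch'), cpu.findtext('model'))

    # One NUMA cell on this host: its memory and count of each page size.
    for cell in root.findall('./host/topology/cells/cell'):
        pages = {p.get('size'): p.text for p in cell.findall('pages')}
        print(f"cell {cell.get('id')}: {cell.findtext('memory')} KiB, {pages}")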
Dec  9 05:39:13 np0005551604 nova_compute[188553]: 2025-12-09 10:39:13.583 188558 DEBUG nova.virt.libvirt.host [None req-73704fe7-ec8a-423b-91ec-a3a32cc7fc3d - - - - - -] Getting domain capabilities for i686 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952#033[00m
Dec  9 05:39:13 np0005551604 nova_compute[188553]: 2025-12-09 10:39:13.603 188558 DEBUG nova.virt.libvirt.host [None req-73704fe7-ec8a-423b-91ec-a3a32cc7fc3d - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Dec  9 05:39:13 np0005551604 nova_compute[188553]: <domainCapabilities>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:  <path>/usr/libexec/qemu-kvm</path>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:  <domain>kvm</domain>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:  <machine>pc-q35-rhel9.8.0</machine>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:  <arch>i686</arch>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:  <vcpu max='4096'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:  <iothreads supported='yes'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:  <os supported='yes'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:    <enum name='firmware'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:    <loader supported='yes'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <enum name='type'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>rom</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>pflash</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </enum>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <enum name='readonly'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>yes</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>no</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </enum>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <enum name='secure'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>no</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </enum>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:    </loader>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:  </os>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:  <cpu>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:    <mode name='host-passthrough' supported='yes'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <enum name='hostPassthroughMigratable'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>on</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>off</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </enum>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:    </mode>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:    <mode name='maximum' supported='yes'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <enum name='maximumMigratable'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>on</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>off</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </enum>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:    </mode>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:    <mode name='host-model' supported='yes'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model fallback='forbid'>EPYC-Rome</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <vendor>AMD</vendor>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <maxphysaddr mode='passthrough' limit='40'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <feature policy='require' name='x2apic'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <feature policy='require' name='tsc-deadline'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <feature policy='require' name='hypervisor'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <feature policy='require' name='tsc_adjust'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <feature policy='require' name='spec-ctrl'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <feature policy='require' name='stibp'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <feature policy='require' name='ssbd'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <feature policy='require' name='cmp_legacy'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <feature policy='require' name='overflow-recov'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <feature policy='require' name='succor'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <feature policy='require' name='ibrs'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <feature policy='require' name='amd-ssbd'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <feature policy='require' name='virt-ssbd'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <feature policy='require' name='lbrv'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <feature policy='require' name='tsc-scale'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <feature policy='require' name='vmcb-clean'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <feature policy='require' name='flushbyasid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <feature policy='require' name='pause-filter'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <feature policy='require' name='pfthreshold'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <feature policy='require' name='svme-addr-chk'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <feature policy='require' name='lfence-always-serializing'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <feature policy='disable' name='xsaves'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:    </mode>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:    <mode name='custom' supported='yes'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Broadwell'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='hle'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='rtm'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Broadwell-IBRS'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='hle'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='rtm'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Broadwell-noTSX'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Broadwell-noTSX-IBRS'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Broadwell-v1'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='hle'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='rtm'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Broadwell-v2'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Broadwell-v3'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='hle'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='rtm'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Broadwell-v4'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Cascadelake-Server'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512bw'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512cd'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512dq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512f'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vl'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vnni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='hle'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pku'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='rtm'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Cascadelake-Server-noTSX'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512bw'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512cd'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512dq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512f'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vl'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vnni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='ibrs-all'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pku'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Cascadelake-Server-v1'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512bw'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512cd'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512dq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512f'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vl'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vnni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='hle'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pku'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='rtm'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Cascadelake-Server-v2'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512bw'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512cd'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512dq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512f'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vl'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vnni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='hle'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='ibrs-all'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pku'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='rtm'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Cascadelake-Server-v3'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512bw'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512cd'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512dq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512f'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vl'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vnni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='ibrs-all'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pku'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Cascadelake-Server-v4'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512bw'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512cd'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512dq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512f'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vl'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vnni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='ibrs-all'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pku'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Cascadelake-Server-v5'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512bw'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512cd'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512dq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512f'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vl'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vnni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='ibrs-all'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pku'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='xsaves'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Cooperlake'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512-bf16'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512bw'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512cd'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512dq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512f'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vl'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vnni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='hle'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='ibrs-all'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pku'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='rtm'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='taa-no'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Cooperlake-v1'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512-bf16'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512bw'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512cd'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512dq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512f'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vl'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vnni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='hle'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='ibrs-all'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pku'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='rtm'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='taa-no'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Cooperlake-v2'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512-bf16'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512bw'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512cd'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512dq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512f'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vl'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vnni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='hle'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='ibrs-all'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pku'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='rtm'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='taa-no'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='xsaves'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Denverton'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='mpx'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Denverton-v1'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='mpx'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Denverton-v2'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Denverton-v3'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='xsaves'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Dhyana-v2'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='xsaves'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='EPYC-Genoa'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='amd-psfd'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='auto-ibrs'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512-bf16'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512-vpopcntdq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512bitalg'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512bw'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512cd'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512dq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512f'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512ifma'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vbmi'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vbmi2'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vl'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vnni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='fsrm'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='gfni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='la57'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='no-nested-data-bp'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='null-sel-clr-base'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pku'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='stibp-always-on'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='vaes'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='vpclmulqdq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='xsaves'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='EPYC-Genoa-v1'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='amd-psfd'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='auto-ibrs'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512-bf16'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512-vpopcntdq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512bitalg'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512bw'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512cd'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512dq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512f'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512ifma'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vbmi'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vbmi2'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vl'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vnni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='fsrm'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='gfni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='la57'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='no-nested-data-bp'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='null-sel-clr-base'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pku'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='stibp-always-on'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='vaes'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='vpclmulqdq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='xsaves'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='EPYC-Milan'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='fsrm'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pku'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='xsaves'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='EPYC-Milan-v1'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='fsrm'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pku'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='xsaves'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='EPYC-Milan-v2'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='amd-psfd'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='fsrm'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='no-nested-data-bp'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='null-sel-clr-base'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pku'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='stibp-always-on'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='vaes'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='vpclmulqdq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='xsaves'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='EPYC-Rome'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='xsaves'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='EPYC-Rome-v1'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='xsaves'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='EPYC-Rome-v2'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='xsaves'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='EPYC-Rome-v3'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='xsaves'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='EPYC-v3'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='xsaves'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='EPYC-v4'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='xsaves'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='GraniteRapids'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='amx-bf16'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='amx-fp16'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='amx-int8'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='amx-tile'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx-vnni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512-bf16'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512-fp16'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512-vpopcntdq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512bitalg'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512bw'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512cd'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512dq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512f'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512ifma'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vbmi'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vbmi2'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vl'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vnni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='bus-lock-detect'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='fbsdp-no'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='fsrc'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='fsrm'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='fsrs'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='fzrm'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='gfni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='hle'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='ibrs-all'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='la57'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='mcdt-no'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pbrsb-no'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pku'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='prefetchiti'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='psdp-no'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='rtm'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='sbdr-ssdp-no'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='serialize'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='taa-no'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='tsx-ldtrk'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='vaes'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='vpclmulqdq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='xfd'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='xsaves'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='GraniteRapids-v1'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='amx-bf16'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='amx-fp16'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='amx-int8'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='amx-tile'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx-vnni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512-bf16'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512-fp16'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512-vpopcntdq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512bitalg'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512bw'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512cd'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512dq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512f'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512ifma'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vbmi'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vbmi2'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vl'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vnni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='bus-lock-detect'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='fbsdp-no'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='fsrc'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='fsrm'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='fsrs'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='fzrm'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='gfni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='hle'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='ibrs-all'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='la57'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='mcdt-no'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pbrsb-no'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pku'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='prefetchiti'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='psdp-no'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='rtm'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='sbdr-ssdp-no'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='serialize'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='taa-no'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='tsx-ldtrk'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='vaes'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='vpclmulqdq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='xfd'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='xsaves'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='GraniteRapids-v2'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='amx-bf16'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='amx-fp16'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='amx-int8'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='amx-tile'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx-vnni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx10'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx10-128'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx10-256'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx10-512'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512-bf16'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512-fp16'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512-vpopcntdq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512bitalg'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512bw'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512cd'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512dq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512f'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512ifma'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vbmi'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vbmi2'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vl'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vnni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='bus-lock-detect'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='cldemote'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='fbsdp-no'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='fsrc'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='fsrm'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='fsrs'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='fzrm'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='gfni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='hle'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='ibrs-all'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='la57'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='mcdt-no'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='movdir64b'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='movdiri'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pbrsb-no'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pku'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='prefetchiti'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='psdp-no'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='rtm'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='sbdr-ssdp-no'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='serialize'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='ss'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='taa-no'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='tsx-ldtrk'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='vaes'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='vpclmulqdq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='xfd'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='xsaves'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Haswell'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='hle'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='rtm'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Haswell-IBRS'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='hle'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='rtm'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Haswell-noTSX'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Haswell-noTSX-IBRS'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Haswell-v1'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='hle'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='rtm'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Haswell-v2'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Haswell-v3'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='hle'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='rtm'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Haswell-v4'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Icelake-Server'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512-vpopcntdq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512bitalg'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512bw'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512cd'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512dq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512f'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vbmi'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vbmi2'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vl'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vnni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='gfni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='hle'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='la57'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pku'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='rtm'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='vaes'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='vpclmulqdq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Icelake-Server-noTSX'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512-vpopcntdq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512bitalg'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512bw'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512cd'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512dq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512f'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vbmi'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vbmi2'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vl'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vnni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='gfni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='la57'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pku'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='vaes'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='vpclmulqdq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Icelake-Server-v1'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512-vpopcntdq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512bitalg'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512bw'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512cd'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512dq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512f'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vbmi'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vbmi2'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vl'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vnni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='gfni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='hle'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='la57'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pku'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='rtm'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='vaes'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='vpclmulqdq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Icelake-Server-v2'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512-vpopcntdq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512bitalg'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512bw'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512cd'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512dq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512f'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vbmi'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vbmi2'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vl'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vnni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='gfni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='la57'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pku'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='vaes'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='vpclmulqdq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Icelake-Server-v3'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512-vpopcntdq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512bitalg'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512bw'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512cd'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512dq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512f'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vbmi'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vbmi2'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vl'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vnni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='gfni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='ibrs-all'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='la57'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pku'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='taa-no'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='vaes'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='vpclmulqdq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Icelake-Server-v4'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512-vpopcntdq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512bitalg'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512bw'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512cd'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512dq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512f'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512ifma'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vbmi'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vbmi2'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vl'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vnni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='fsrm'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='gfni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='ibrs-all'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='la57'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pku'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='taa-no'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='vaes'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='vpclmulqdq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Icelake-Server-v5'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512-vpopcntdq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512bitalg'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512bw'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512cd'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512dq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512f'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512ifma'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vbmi'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vbmi2'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vl'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vnni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='fsrm'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='gfni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='ibrs-all'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='la57'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pku'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='taa-no'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='vaes'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='vpclmulqdq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='xsaves'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Icelake-Server-v6'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512-vpopcntdq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512bitalg'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512bw'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512cd'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512dq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512f'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512ifma'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vbmi'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vbmi2'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vl'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vnni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='fsrm'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='gfni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='ibrs-all'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='la57'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pku'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='taa-no'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='vaes'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='vpclmulqdq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='xsaves'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Icelake-Server-v7'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512-vpopcntdq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512bitalg'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512bw'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512cd'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512dq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512f'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512ifma'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vbmi'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vbmi2'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vl'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vnni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='fsrm'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='gfni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='hle'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='ibrs-all'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='la57'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pku'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='rtm'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='taa-no'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='vaes'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='vpclmulqdq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='xsaves'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='IvyBridge'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='IvyBridge-IBRS'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='IvyBridge-v1'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='IvyBridge-v2'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='KnightsMill'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512-4fmaps'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512-4vnniw'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512-vpopcntdq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512cd'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512er'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512f'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512pf'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='ss'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='KnightsMill-v1'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512-4fmaps'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512-4vnniw'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512-vpopcntdq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512cd'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512er'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512f'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512pf'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='ss'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Opteron_G4'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='fma4'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='xop'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Opteron_G4-v1'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='fma4'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='xop'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Opteron_G5'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='fma4'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='tbm'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='xop'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Opteron_G5-v1'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='fma4'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='tbm'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='xop'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='SapphireRapids'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='amx-bf16'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='amx-int8'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='amx-tile'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx-vnni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512-bf16'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512-fp16'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512-vpopcntdq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512bitalg'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512bw'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512cd'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512dq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512f'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512ifma'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vbmi'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vbmi2'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vl'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vnni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='bus-lock-detect'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='fsrc'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='fsrm'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='fsrs'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='fzrm'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='gfni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='hle'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='ibrs-all'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='la57'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pku'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='rtm'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='serialize'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='taa-no'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='tsx-ldtrk'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='vaes'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='vpclmulqdq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='xfd'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='xsaves'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='SapphireRapids-v1'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='amx-bf16'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='amx-int8'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='amx-tile'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx-vnni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512-bf16'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512-fp16'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512-vpopcntdq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512bitalg'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512bw'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512cd'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512dq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512f'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512ifma'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vbmi'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vbmi2'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vl'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vnni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='bus-lock-detect'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='fsrc'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='fsrm'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='fsrs'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='fzrm'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='gfni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='hle'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='ibrs-all'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='la57'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pku'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='rtm'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='serialize'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='taa-no'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='tsx-ldtrk'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='vaes'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='vpclmulqdq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='xfd'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='xsaves'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='SapphireRapids-v2'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='amx-bf16'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='amx-int8'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='amx-tile'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx-vnni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512-bf16'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512-fp16'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512-vpopcntdq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512bitalg'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512bw'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512cd'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512dq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512f'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512ifma'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vbmi'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vbmi2'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vl'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vnni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='bus-lock-detect'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='fbsdp-no'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='fsrc'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='fsrm'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='fsrs'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='fzrm'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='gfni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='hle'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='ibrs-all'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='la57'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pku'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='psdp-no'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='rtm'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='sbdr-ssdp-no'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='serialize'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='taa-no'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='tsx-ldtrk'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='vaes'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='vpclmulqdq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='xfd'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='xsaves'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='SapphireRapids-v3'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='amx-bf16'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='amx-int8'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='amx-tile'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx-vnni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512-bf16'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512-fp16'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512-vpopcntdq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512bitalg'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512bw'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512cd'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512dq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512f'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512ifma'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vbmi'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vbmi2'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vl'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vnni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='bus-lock-detect'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='cldemote'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='fbsdp-no'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='fsrc'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='fsrm'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='fsrs'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='fzrm'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='gfni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='hle'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='ibrs-all'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='la57'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='movdir64b'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='movdiri'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pku'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='psdp-no'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='rtm'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='sbdr-ssdp-no'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='serialize'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='ss'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='taa-no'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='tsx-ldtrk'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='vaes'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='vpclmulqdq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='xfd'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='xsaves'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='SierraForest'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx-ifma'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx-ne-convert'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx-vnni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx-vnni-int8'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='bus-lock-detect'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='cmpccxadd'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='fbsdp-no'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='fsrm'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='fsrs'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='gfni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='ibrs-all'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='mcdt-no'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pbrsb-no'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pku'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='psdp-no'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='sbdr-ssdp-no'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='serialize'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='vaes'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='vpclmulqdq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='xsaves'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='SierraForest-v1'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx-ifma'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx-ne-convert'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx-vnni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx-vnni-int8'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='bus-lock-detect'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='cmpccxadd'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='fbsdp-no'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='fsrm'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='fsrs'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='gfni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='ibrs-all'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='mcdt-no'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pbrsb-no'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pku'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='psdp-no'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='sbdr-ssdp-no'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='serialize'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='vaes'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='vpclmulqdq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='xsaves'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Skylake-Client'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='hle'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='rtm'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Skylake-Client-IBRS'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='hle'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='rtm'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Skylake-Client-v1'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='hle'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='rtm'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Skylake-Client-v2'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='hle'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='rtm'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Skylake-Client-v3'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Skylake-Client-v4'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='xsaves'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Skylake-Server'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512bw'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512cd'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512dq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512f'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vl'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='hle'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pku'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='rtm'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Skylake-Server-IBRS'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512bw'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512cd'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512dq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512f'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vl'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='hle'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pku'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='rtm'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512bw'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512cd'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512dq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512f'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vl'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pku'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Skylake-Server-v1'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512bw'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512cd'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512dq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512f'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vl'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='hle'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pku'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='rtm'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Skylake-Server-v2'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512bw'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512cd'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512dq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512f'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vl'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='hle'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pku'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='rtm'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Skylake-Server-v3'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512bw'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512cd'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512dq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512f'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vl'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pku'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Skylake-Server-v4'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512bw'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512cd'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512dq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512f'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vl'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pku'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Skylake-Server-v5'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512bw'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512cd'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512dq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512f'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vl'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pku'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='xsaves'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Snowridge'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='cldemote'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='core-capability'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='gfni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='movdir64b'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='movdiri'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='mpx'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='split-lock-detect'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Snowridge-v1'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='cldemote'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='core-capability'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='gfni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='movdir64b'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='movdiri'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='mpx'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='split-lock-detect'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Snowridge-v2'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='cldemote'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='core-capability'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='gfni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='movdir64b'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='movdiri'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='split-lock-detect'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Snowridge-v3'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='cldemote'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='core-capability'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='gfni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='movdir64b'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='movdiri'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='split-lock-detect'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='xsaves'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Snowridge-v4'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='cldemote'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='gfni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='movdir64b'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='movdiri'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='xsaves'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='athlon'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='3dnow'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='3dnowext'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='athlon-v1'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='3dnow'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='3dnowext'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='core2duo'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='ss'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='core2duo-v1'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='ss'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='coreduo'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='ss'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='coreduo-v1'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='ss'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='n270'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='ss'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='n270-v1'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='ss'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='phenom'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='3dnow'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='3dnowext'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='phenom-v1'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='3dnow'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='3dnowext'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:    </mode>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:  </cpu>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:  <memoryBacking supported='yes'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:    <enum name='sourceType'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <value>file</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <value>anonymous</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <value>memfd</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:    </enum>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:  </memoryBacking>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:  <devices>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:    <disk supported='yes'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <enum name='diskDevice'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>disk</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>cdrom</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>floppy</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>lun</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </enum>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <enum name='bus'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>fdc</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>scsi</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>virtio</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>usb</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>sata</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </enum>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <enum name='model'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>virtio</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>virtio-transitional</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>virtio-non-transitional</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </enum>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:    </disk>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:    <graphics supported='yes'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <enum name='type'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>vnc</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>egl-headless</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>dbus</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </enum>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:    </graphics>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:    <video supported='yes'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <enum name='modelType'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>vga</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>cirrus</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>virtio</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>none</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>bochs</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>ramfb</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </enum>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:    </video>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:    <hostdev supported='yes'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <enum name='mode'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>subsystem</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </enum>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <enum name='startupPolicy'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>default</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>mandatory</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>requisite</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>optional</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </enum>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <enum name='subsysType'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>usb</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>pci</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>scsi</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </enum>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <enum name='capsType'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <enum name='pciBackend'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:    </hostdev>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:    <rng supported='yes'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <enum name='model'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>virtio</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>virtio-transitional</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>virtio-non-transitional</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </enum>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <enum name='backendModel'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>random</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>egd</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>builtin</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </enum>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:    </rng>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:    <filesystem supported='yes'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <enum name='driverType'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>path</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>handle</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>virtiofs</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </enum>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:    </filesystem>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:    <tpm supported='yes'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <enum name='model'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>tpm-tis</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>tpm-crb</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </enum>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <enum name='backendModel'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>emulator</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>external</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </enum>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <enum name='backendVersion'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>2.0</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </enum>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:    </tpm>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:    <redirdev supported='yes'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <enum name='bus'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>usb</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </enum>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:    </redirdev>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:    <channel supported='yes'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <enum name='type'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>pty</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>unix</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </enum>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:    </channel>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:    <crypto supported='yes'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <enum name='model'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <enum name='type'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>qemu</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </enum>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <enum name='backendModel'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>builtin</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </enum>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:    </crypto>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:    <interface supported='yes'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <enum name='backendType'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>default</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>passt</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </enum>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:    </interface>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:    <panic supported='yes'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <enum name='model'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>isa</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>hyperv</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </enum>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:    </panic>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:    <console supported='yes'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <enum name='type'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>null</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>vc</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>pty</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>dev</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>file</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>pipe</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>stdio</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>udp</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>tcp</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>unix</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>qemu-vdagent</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>dbus</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </enum>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:    </console>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:  </devices>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:  <features>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:    <gic supported='no'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:    <vmcoreinfo supported='yes'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:    <genid supported='yes'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:    <backingStoreInput supported='yes'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:    <backup supported='yes'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:    <async-teardown supported='yes'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:    <ps2 supported='yes'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:    <sev supported='no'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:    <sgx supported='no'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:    <hyperv supported='yes'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <enum name='features'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>relaxed</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>vapic</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>spinlocks</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>vpindex</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>runtime</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>synic</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>stimer</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>reset</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>vendor_id</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>frequencies</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>reenlightenment</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>tlbflush</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>ipi</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>avic</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>emsr_bitmap</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>xmm_input</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </enum>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <defaults>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <spinlocks>4095</spinlocks>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <stimer_direct>on</stimer_direct>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <tlbflush_direct>on</tlbflush_direct>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <tlbflush_extended>on</tlbflush_extended>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <vendor_id>Linux KVM Hv</vendor_id>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </defaults>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:    </hyperv>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:    <launchSecurity supported='yes'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <enum name='sectype'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>tdx</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </enum>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:    </launchSecurity>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:  </features>
Dec  9 05:39:13 np0005551604 nova_compute[188553]: </domainCapabilities>
Dec  9 05:39:13 np0005551604 nova_compute[188553]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
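[editorial note: a dump like the one above can be reproduced directly against libvirt via the getDomainCapabilities API, which is what nova's _get_domain_capabilities ultimately calls. The sketch below is a minimal illustration, not nova's code: the qemu:///system URI and the arch/machine/virttype arguments are assumptions chosen to mirror this host (the /usr/libexec/qemu-kvm emulator path does appear in the dump), and the XPath queries simply walk the <model usable=...> and <blockers> elements shown in the custom CPU mode section below.]

    # Hedged sketch: query libvirt domain capabilities and list custom CPU
    # models with their blocking features, mirroring the XML logged above.
    import xml.etree.ElementTree as ET

    import libvirt  # libvirt-python bindings

    # Assumption: a local system connection, as a compute host would use.
    conn = libvirt.open('qemu:///system')

    # getDomainCapabilities(emulatorbin, arch, machine, virttype, flags);
    # the emulator path matches the <path> element in the dump, the rest
    # are illustrative values (the log queries several arches, e.g. i686).
    caps_xml = conn.getDomainCapabilities(
        '/usr/libexec/qemu-kvm',
        'x86_64',
        'pc',
        'kvm',
        0,
    )

    root = ET.fromstring(caps_xml)

    # Report each custom-mode CPU model and whether it is usable here.
    for model in root.findall("./cpu/mode[@name='custom']/model"):
        print(f"{model.text}: usable={model.get('usable')}")

    # For unusable models, list the features blocking them, i.e. the
    # <blockers model=...><feature name=.../></blockers> entries above.
    for blockers in root.findall("./cpu/mode[@name='custom']/blockers"):
        feats = [f.get('name') for f in blockers.findall('feature')]
        print(f"{blockers.get('model')} blocked by: {', '.join(feats)}")

    conn.close()

[on this host the output would show, e.g., EPYC-Rome as usable='no' with xsaves as its only blocker, consistent with the entries later in this dump.]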
Dec  9 05:39:13 np0005551604 nova_compute[188553]: 2025-12-09 10:39:13.612 188558 DEBUG nova.virt.libvirt.host [None req-73704fe7-ec8a-423b-91ec-a3a32cc7fc3d - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Dec  9 05:39:13 np0005551604 nova_compute[188553]: <domainCapabilities>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:  <path>/usr/libexec/qemu-kvm</path>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:  <domain>kvm</domain>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:  <machine>pc-i440fx-rhel7.6.0</machine>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:  <arch>i686</arch>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:  <vcpu max='240'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:  <iothreads supported='yes'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:  <os supported='yes'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:    <enum name='firmware'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:    <loader supported='yes'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <enum name='type'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>rom</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>pflash</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </enum>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <enum name='readonly'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>yes</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>no</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </enum>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <enum name='secure'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>no</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </enum>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:    </loader>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:  </os>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:  <cpu>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:    <mode name='host-passthrough' supported='yes'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <enum name='hostPassthroughMigratable'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>on</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>off</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </enum>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:    </mode>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:    <mode name='maximum' supported='yes'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <enum name='maximumMigratable'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>on</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>off</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </enum>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:    </mode>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:    <mode name='host-model' supported='yes'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model fallback='forbid'>EPYC-Rome</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <vendor>AMD</vendor>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <maxphysaddr mode='passthrough' limit='40'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <feature policy='require' name='x2apic'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <feature policy='require' name='tsc-deadline'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <feature policy='require' name='hypervisor'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <feature policy='require' name='tsc_adjust'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <feature policy='require' name='spec-ctrl'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <feature policy='require' name='stibp'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <feature policy='require' name='ssbd'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <feature policy='require' name='cmp_legacy'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <feature policy='require' name='overflow-recov'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <feature policy='require' name='succor'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <feature policy='require' name='ibrs'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <feature policy='require' name='amd-ssbd'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <feature policy='require' name='virt-ssbd'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <feature policy='require' name='lbrv'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <feature policy='require' name='tsc-scale'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <feature policy='require' name='vmcb-clean'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <feature policy='require' name='flushbyasid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <feature policy='require' name='pause-filter'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <feature policy='require' name='pfthreshold'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <feature policy='require' name='svme-addr-chk'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <feature policy='require' name='lfence-always-serializing'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <feature policy='disable' name='xsaves'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:    </mode>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:    <mode name='custom' supported='yes'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Broadwell'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='hle'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='rtm'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Broadwell-IBRS'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='hle'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='rtm'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Broadwell-noTSX'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Broadwell-noTSX-IBRS'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Broadwell-v1'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='hle'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='rtm'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Broadwell-v2'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Broadwell-v3'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='hle'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='rtm'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Broadwell-v4'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Cascadelake-Server'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512bw'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512cd'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512dq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512f'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vl'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vnni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='hle'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pku'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='rtm'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Cascadelake-Server-noTSX'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512bw'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512cd'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512dq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512f'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vl'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vnni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='ibrs-all'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pku'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Cascadelake-Server-v1'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512bw'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512cd'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512dq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512f'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vl'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vnni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='hle'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pku'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='rtm'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Cascadelake-Server-v2'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512bw'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512cd'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512dq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512f'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vl'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vnni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='hle'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='ibrs-all'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pku'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='rtm'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Cascadelake-Server-v3'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512bw'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512cd'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512dq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512f'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vl'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vnni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='ibrs-all'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pku'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Cascadelake-Server-v4'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512bw'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512cd'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512dq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512f'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vl'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vnni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='ibrs-all'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pku'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Cascadelake-Server-v5'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512bw'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512cd'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512dq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512f'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vl'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vnni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='ibrs-all'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pku'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='xsaves'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Cooperlake'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512-bf16'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512bw'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512cd'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512dq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512f'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vl'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vnni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='hle'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='ibrs-all'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pku'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='rtm'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='taa-no'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Cooperlake-v1'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512-bf16'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512bw'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512cd'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512dq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512f'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vl'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vnni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='hle'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='ibrs-all'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pku'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='rtm'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='taa-no'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Cooperlake-v2'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512-bf16'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512bw'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512cd'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512dq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512f'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vl'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vnni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='hle'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='ibrs-all'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pku'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='rtm'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='taa-no'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='xsaves'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Denverton'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='mpx'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Denverton-v1'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='mpx'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Denverton-v2'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Denverton-v3'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='xsaves'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Dhyana-v2'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='xsaves'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='EPYC-Genoa'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='amd-psfd'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='auto-ibrs'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512-bf16'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512-vpopcntdq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512bitalg'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512bw'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512cd'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512dq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512f'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512ifma'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vbmi'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vbmi2'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vl'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vnni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='fsrm'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='gfni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='la57'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='no-nested-data-bp'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='null-sel-clr-base'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pku'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='stibp-always-on'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='vaes'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='vpclmulqdq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='xsaves'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='EPYC-Genoa-v1'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='amd-psfd'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='auto-ibrs'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512-bf16'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512-vpopcntdq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512bitalg'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512bw'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512cd'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512dq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512f'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512ifma'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vbmi'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vbmi2'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vl'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vnni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='fsrm'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='gfni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='la57'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='no-nested-data-bp'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='null-sel-clr-base'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pku'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='stibp-always-on'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='vaes'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='vpclmulqdq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='xsaves'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='EPYC-Milan'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='fsrm'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pku'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='xsaves'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='EPYC-Milan-v1'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='fsrm'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pku'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='xsaves'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='EPYC-Milan-v2'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='amd-psfd'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='fsrm'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='no-nested-data-bp'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='null-sel-clr-base'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pku'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='stibp-always-on'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='vaes'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='vpclmulqdq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='xsaves'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='EPYC-Rome'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='xsaves'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='EPYC-Rome-v1'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='xsaves'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='EPYC-Rome-v2'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='xsaves'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='EPYC-Rome-v3'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='xsaves'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='EPYC-v3'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='xsaves'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='EPYC-v4'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='xsaves'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='GraniteRapids'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='amx-bf16'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='amx-fp16'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='amx-int8'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='amx-tile'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx-vnni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512-bf16'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512-fp16'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512-vpopcntdq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512bitalg'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512bw'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512cd'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512dq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512f'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512ifma'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vbmi'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vbmi2'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vl'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vnni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='bus-lock-detect'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='fbsdp-no'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='fsrc'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='fsrm'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='fsrs'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='fzrm'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='gfni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='hle'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='ibrs-all'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='la57'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='mcdt-no'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pbrsb-no'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pku'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='prefetchiti'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='psdp-no'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='rtm'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='sbdr-ssdp-no'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='serialize'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='taa-no'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='tsx-ldtrk'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='vaes'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='vpclmulqdq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='xfd'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='xsaves'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='GraniteRapids-v1'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='amx-bf16'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='amx-fp16'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='amx-int8'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='amx-tile'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx-vnni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512-bf16'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512-fp16'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512-vpopcntdq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512bitalg'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512bw'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512cd'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512dq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512f'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512ifma'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vbmi'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vbmi2'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vl'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vnni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='bus-lock-detect'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='fbsdp-no'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='fsrc'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='fsrm'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='fsrs'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='fzrm'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='gfni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='hle'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='ibrs-all'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='la57'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='mcdt-no'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pbrsb-no'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pku'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='prefetchiti'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='psdp-no'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='rtm'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='sbdr-ssdp-no'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='serialize'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='taa-no'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='tsx-ldtrk'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='vaes'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='vpclmulqdq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='xfd'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='xsaves'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='GraniteRapids-v2'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='amx-bf16'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='amx-fp16'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='amx-int8'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='amx-tile'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx-vnni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx10'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx10-128'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx10-256'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx10-512'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512-bf16'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512-fp16'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512-vpopcntdq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512bitalg'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512bw'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512cd'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512dq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512f'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512ifma'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vbmi'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vbmi2'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vl'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vnni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='bus-lock-detect'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='cldemote'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='fbsdp-no'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='fsrc'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='fsrm'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='fsrs'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='fzrm'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='gfni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='hle'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='ibrs-all'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='la57'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='mcdt-no'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='movdir64b'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='movdiri'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pbrsb-no'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pku'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='prefetchiti'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='psdp-no'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='rtm'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='sbdr-ssdp-no'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='serialize'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='ss'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='taa-no'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='tsx-ldtrk'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='vaes'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='vpclmulqdq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='xfd'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='xsaves'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Haswell'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='hle'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='rtm'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Haswell-IBRS'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='hle'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='rtm'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Haswell-noTSX'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Haswell-noTSX-IBRS'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Haswell-v1'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='hle'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='rtm'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Haswell-v2'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Haswell-v3'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='hle'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='rtm'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Haswell-v4'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Icelake-Server'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512-vpopcntdq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512bitalg'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512bw'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512cd'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512dq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512f'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vbmi'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vbmi2'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vl'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vnni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='gfni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='hle'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='la57'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pku'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='rtm'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='vaes'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='vpclmulqdq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Icelake-Server-noTSX'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512-vpopcntdq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512bitalg'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512bw'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512cd'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512dq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512f'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vbmi'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vbmi2'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vl'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vnni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='gfni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='la57'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pku'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='vaes'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='vpclmulqdq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Icelake-Server-v1'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512-vpopcntdq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512bitalg'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512bw'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512cd'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512dq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512f'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vbmi'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vbmi2'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vl'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vnni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='gfni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='hle'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='la57'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pku'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='rtm'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='vaes'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='vpclmulqdq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Icelake-Server-v2'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512-vpopcntdq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512bitalg'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512bw'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512cd'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512dq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512f'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vbmi'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vbmi2'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vl'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vnni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='gfni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='la57'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pku'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='vaes'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='vpclmulqdq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Icelake-Server-v3'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512-vpopcntdq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512bitalg'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512bw'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512cd'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512dq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512f'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vbmi'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vbmi2'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vl'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vnni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='gfni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='ibrs-all'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='la57'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pku'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='taa-no'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='vaes'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='vpclmulqdq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Icelake-Server-v4'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512-vpopcntdq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512bitalg'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512bw'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512cd'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512dq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512f'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512ifma'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vbmi'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vbmi2'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vl'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vnni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='fsrm'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='gfni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='ibrs-all'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='la57'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pku'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='taa-no'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='vaes'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='vpclmulqdq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Icelake-Server-v5'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512-vpopcntdq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512bitalg'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512bw'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512cd'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512dq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512f'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512ifma'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vbmi'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vbmi2'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vl'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vnni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='fsrm'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='gfni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='ibrs-all'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='la57'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pku'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='taa-no'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='vaes'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='vpclmulqdq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='xsaves'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Icelake-Server-v6'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512-vpopcntdq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512bitalg'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512bw'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512cd'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512dq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512f'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512ifma'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vbmi'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vbmi2'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vl'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vnni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='fsrm'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='gfni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='ibrs-all'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='la57'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pku'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='taa-no'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='vaes'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='vpclmulqdq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='xsaves'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Icelake-Server-v7'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512-vpopcntdq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512bitalg'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512bw'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512cd'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512dq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512f'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512ifma'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vbmi'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vbmi2'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vl'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vnni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='fsrm'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='gfni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='hle'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='ibrs-all'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='la57'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pku'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='rtm'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='taa-no'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='vaes'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='vpclmulqdq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='xsaves'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='IvyBridge'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='IvyBridge-IBRS'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='IvyBridge-v1'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='IvyBridge-v2'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='KnightsMill'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512-4fmaps'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512-4vnniw'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512-vpopcntdq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512cd'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512er'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512f'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512pf'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='ss'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='KnightsMill-v1'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512-4fmaps'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512-4vnniw'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512-vpopcntdq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512cd'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512er'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512f'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512pf'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='ss'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Opteron_G4'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='fma4'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='xop'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Opteron_G4-v1'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='fma4'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='xop'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Opteron_G5'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='fma4'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='tbm'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='xop'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Opteron_G5-v1'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='fma4'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='tbm'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='xop'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='SapphireRapids'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='amx-bf16'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='amx-int8'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='amx-tile'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx-vnni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512-bf16'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512-fp16'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512-vpopcntdq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512bitalg'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512bw'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512cd'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512dq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512f'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512ifma'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vbmi'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vbmi2'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vl'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vnni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='bus-lock-detect'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='fsrc'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='fsrm'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='fsrs'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='fzrm'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='gfni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='hle'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='ibrs-all'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='la57'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pku'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='rtm'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='serialize'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='taa-no'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='tsx-ldtrk'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='vaes'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='vpclmulqdq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='xfd'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='xsaves'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='SapphireRapids-v1'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='amx-bf16'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='amx-int8'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='amx-tile'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx-vnni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512-bf16'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512-fp16'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512-vpopcntdq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512bitalg'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512bw'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512cd'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512dq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512f'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512ifma'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vbmi'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vbmi2'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vl'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vnni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='bus-lock-detect'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='fsrc'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='fsrm'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='fsrs'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='fzrm'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='gfni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='hle'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='ibrs-all'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='la57'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pku'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='rtm'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='serialize'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='taa-no'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='tsx-ldtrk'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='vaes'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='vpclmulqdq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='xfd'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='xsaves'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='SapphireRapids-v2'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='amx-bf16'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='amx-int8'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='amx-tile'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx-vnni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512-bf16'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512-fp16'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512-vpopcntdq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512bitalg'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512bw'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512cd'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512dq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512f'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512ifma'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vbmi'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vbmi2'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vl'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vnni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='bus-lock-detect'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='fbsdp-no'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='fsrc'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='fsrm'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='fsrs'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='fzrm'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='gfni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='hle'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='ibrs-all'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='la57'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pku'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='psdp-no'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='rtm'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='sbdr-ssdp-no'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='serialize'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='taa-no'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='tsx-ldtrk'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='vaes'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='vpclmulqdq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='xfd'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='xsaves'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='SapphireRapids-v3'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='amx-bf16'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='amx-int8'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='amx-tile'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx-vnni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512-bf16'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512-fp16'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512-vpopcntdq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512bitalg'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512bw'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512cd'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512dq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512f'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512ifma'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vbmi'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vbmi2'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vl'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vnni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='bus-lock-detect'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='cldemote'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='fbsdp-no'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='fsrc'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='fsrm'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='fsrs'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='fzrm'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='gfni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='hle'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='ibrs-all'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='la57'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='movdir64b'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='movdiri'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pku'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='psdp-no'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='rtm'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='sbdr-ssdp-no'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='serialize'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='ss'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='taa-no'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='tsx-ldtrk'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='vaes'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='vpclmulqdq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='xfd'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='xsaves'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='SierraForest'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx-ifma'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx-ne-convert'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx-vnni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx-vnni-int8'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='bus-lock-detect'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='cmpccxadd'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='fbsdp-no'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='fsrm'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='fsrs'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='gfni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='ibrs-all'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='mcdt-no'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pbrsb-no'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pku'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='psdp-no'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='sbdr-ssdp-no'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='serialize'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='vaes'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='vpclmulqdq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='xsaves'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='SierraForest-v1'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx-ifma'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx-ne-convert'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx-vnni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx-vnni-int8'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='bus-lock-detect'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='cmpccxadd'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='fbsdp-no'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='fsrm'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='fsrs'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='gfni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='ibrs-all'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='mcdt-no'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pbrsb-no'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pku'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='psdp-no'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='sbdr-ssdp-no'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='serialize'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='vaes'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='vpclmulqdq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='xsaves'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Skylake-Client'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='hle'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='rtm'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Skylake-Client-IBRS'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='hle'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='rtm'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Skylake-Client-v1'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='hle'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='rtm'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Skylake-Client-v2'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='hle'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='rtm'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Skylake-Client-v3'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Skylake-Client-v4'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='xsaves'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Skylake-Server'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512bw'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512cd'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512dq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512f'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vl'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='hle'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pku'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='rtm'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Skylake-Server-IBRS'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512bw'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512cd'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512dq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512f'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vl'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='hle'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pku'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='rtm'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512bw'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512cd'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512dq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512f'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vl'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pku'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Skylake-Server-v1'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512bw'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512cd'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512dq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512f'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vl'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='hle'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pku'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='rtm'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Skylake-Server-v2'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512bw'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512cd'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512dq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512f'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vl'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='hle'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pku'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='rtm'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Skylake-Server-v3'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512bw'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512cd'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512dq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512f'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vl'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pku'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Skylake-Server-v4'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512bw'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512cd'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512dq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512f'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vl'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pku'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Skylake-Server-v5'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512bw'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512cd'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512dq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512f'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vl'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pku'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='xsaves'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Snowridge'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='cldemote'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='core-capability'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='gfni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='movdir64b'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='movdiri'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='mpx'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='split-lock-detect'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Snowridge-v1'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='cldemote'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='core-capability'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='gfni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='movdir64b'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='movdiri'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='mpx'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='split-lock-detect'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Snowridge-v2'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='cldemote'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='core-capability'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='gfni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='movdir64b'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='movdiri'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='split-lock-detect'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Snowridge-v3'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='cldemote'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='core-capability'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='gfni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='movdir64b'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='movdiri'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='split-lock-detect'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='xsaves'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Snowridge-v4'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='cldemote'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='gfni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='movdir64b'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='movdiri'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='xsaves'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='athlon'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='3dnow'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='3dnowext'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='athlon-v1'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='3dnow'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='3dnowext'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='core2duo'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='ss'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='core2duo-v1'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='ss'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='coreduo'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='ss'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='coreduo-v1'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='ss'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='n270'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='ss'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='n270-v1'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='ss'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='phenom'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='3dnow'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='3dnowext'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='phenom-v1'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='3dnow'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='3dnowext'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:    </mode>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:  </cpu>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:  <memoryBacking supported='yes'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:    <enum name='sourceType'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <value>file</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <value>anonymous</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <value>memfd</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:    </enum>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:  </memoryBacking>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:  <devices>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:    <disk supported='yes'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <enum name='diskDevice'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>disk</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>cdrom</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>floppy</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>lun</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </enum>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <enum name='bus'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>ide</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>fdc</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>scsi</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>virtio</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>usb</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>sata</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </enum>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <enum name='model'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>virtio</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>virtio-transitional</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>virtio-non-transitional</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </enum>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:    </disk>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:    <graphics supported='yes'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <enum name='type'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>vnc</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>egl-headless</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>dbus</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </enum>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:    </graphics>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:    <video supported='yes'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <enum name='modelType'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>vga</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>cirrus</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>virtio</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>none</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>bochs</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>ramfb</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </enum>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:    </video>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:    <hostdev supported='yes'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <enum name='mode'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>subsystem</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </enum>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <enum name='startupPolicy'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>default</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>mandatory</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>requisite</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>optional</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </enum>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <enum name='subsysType'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>usb</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>pci</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>scsi</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </enum>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <enum name='capsType'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <enum name='pciBackend'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:    </hostdev>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:    <rng supported='yes'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <enum name='model'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>virtio</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>virtio-transitional</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>virtio-non-transitional</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </enum>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <enum name='backendModel'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>random</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>egd</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>builtin</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </enum>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:    </rng>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:    <filesystem supported='yes'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <enum name='driverType'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>path</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>handle</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>virtiofs</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </enum>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:    </filesystem>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:    <tpm supported='yes'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <enum name='model'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>tpm-tis</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>tpm-crb</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </enum>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <enum name='backendModel'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>emulator</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>external</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </enum>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <enum name='backendVersion'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>2.0</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </enum>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:    </tpm>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:    <redirdev supported='yes'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <enum name='bus'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>usb</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </enum>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:    </redirdev>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:    <channel supported='yes'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <enum name='type'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>pty</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>unix</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </enum>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:    </channel>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:    <crypto supported='yes'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <enum name='model'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <enum name='type'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>qemu</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </enum>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <enum name='backendModel'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>builtin</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </enum>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:    </crypto>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:    <interface supported='yes'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <enum name='backendType'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>default</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>passt</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </enum>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:    </interface>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:    <panic supported='yes'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <enum name='model'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>isa</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>hyperv</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </enum>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:    </panic>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:    <console supported='yes'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <enum name='type'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>null</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>vc</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>pty</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>dev</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>file</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>pipe</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>stdio</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>udp</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>tcp</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>unix</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>qemu-vdagent</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>dbus</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </enum>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:    </console>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:  </devices>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:  <features>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:    <gic supported='no'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:    <vmcoreinfo supported='yes'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:    <genid supported='yes'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:    <backingStoreInput supported='yes'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:    <backup supported='yes'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:    <async-teardown supported='yes'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:    <ps2 supported='yes'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:    <sev supported='no'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:    <sgx supported='no'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:    <hyperv supported='yes'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <enum name='features'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>relaxed</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>vapic</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>spinlocks</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>vpindex</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>runtime</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>synic</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>stimer</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>reset</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>vendor_id</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>frequencies</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>reenlightenment</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>tlbflush</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>ipi</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>avic</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>emsr_bitmap</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>xmm_input</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </enum>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <defaults>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <spinlocks>4095</spinlocks>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <stimer_direct>on</stimer_direct>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <tlbflush_direct>on</tlbflush_direct>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <tlbflush_extended>on</tlbflush_extended>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <vendor_id>Linux KVM Hv</vendor_id>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </defaults>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:    </hyperv>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:    <launchSecurity supported='yes'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <enum name='sectype'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>tdx</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </enum>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:    </launchSecurity>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:  </features>
Dec  9 05:39:13 np0005551604 nova_compute[188553]: </domainCapabilities>
Dec  9 05:39:13 np0005551604 nova_compute[188553]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
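The record above is the complete <domainCapabilities> document that nova-compute retrieves from libvirt while probing the host. A minimal sketch of how such a document can be fetched and the usable='no' CPU models mapped to their blocking features follows, assuming libvirt-python and a local qemu:///system connection; the emulator path, arch, machine type, and virt type are taken from the log records above, and this is an illustration, not Nova's actual code:

    import libvirt
    import xml.etree.ElementTree as ET

    # Parameters mirror the values reported in the log above.
    conn = libvirt.open('qemu:///system')  # assumed local connection URI
    caps_xml = conn.getDomainCapabilities(
        '/usr/libexec/qemu-kvm', 'x86_64', 'q35', 'kvm', 0)
    conn.close()

    root = ET.fromstring(caps_xml)
    custom = root.find(".//cpu/mode[@name='custom']")
    for model in custom.findall('model'):
        if model.get('usable') == 'yes':
            print(f"{model.text}: usable")
        else:
            # Each unusable model has a matching <blockers> element naming
            # the CPU features the host lacks for that model ('usable' may
            # also be 'unknown'; this sketch treats anything but 'yes' alike).
            blockers = custom.find(f"blockers[@model='{model.text}']")
            missing = ([f.get('name') for f in blockers.findall('feature')]
                       if blockers is not None else [])
            print(f"{model.text}: blocked by {', '.join(missing) or '(unspecified)'}")

Run against this host, such a sketch would report, for example, Skylake-Client blocked by erms, hle, invpcid, pcid, rtm, matching the <blockers> entries logged above.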
Dec  9 05:39:13 np0005551604 nova_compute[188553]: 2025-12-09 10:39:13.636 188558 DEBUG nova.virt.libvirt.host [None req-73704fe7-ec8a-423b-91ec-a3a32cc7fc3d - - - - - -] Getting domain capabilities for x86_64 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Dec  9 05:39:13 np0005551604 nova_compute[188553]: 2025-12-09 10:39:13.641 188558 DEBUG nova.virt.libvirt.host [None req-73704fe7-ec8a-423b-91ec-a3a32cc7fc3d - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Dec  9 05:39:13 np0005551604 nova_compute[188553]: <domainCapabilities>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:  <path>/usr/libexec/qemu-kvm</path>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:  <domain>kvm</domain>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:  <machine>pc-q35-rhel9.8.0</machine>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:  <arch>x86_64</arch>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:  <vcpu max='4096'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:  <iothreads supported='yes'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:  <os supported='yes'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:    <enum name='firmware'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <value>efi</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:    </enum>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:    <loader supported='yes'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <enum name='type'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>rom</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>pflash</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </enum>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <enum name='readonly'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>yes</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>no</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </enum>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <enum name='secure'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>yes</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>no</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </enum>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:    </loader>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:  </os>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:  <cpu>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:    <mode name='host-passthrough' supported='yes'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <enum name='hostPassthroughMigratable'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>on</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>off</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </enum>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:    </mode>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:    <mode name='maximum' supported='yes'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <enum name='maximumMigratable'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>on</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>off</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </enum>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:    </mode>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:    <mode name='host-model' supported='yes'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model fallback='forbid'>EPYC-Rome</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <vendor>AMD</vendor>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <maxphysaddr mode='passthrough' limit='40'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <feature policy='require' name='x2apic'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <feature policy='require' name='tsc-deadline'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <feature policy='require' name='hypervisor'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <feature policy='require' name='tsc_adjust'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <feature policy='require' name='spec-ctrl'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <feature policy='require' name='stibp'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <feature policy='require' name='ssbd'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <feature policy='require' name='cmp_legacy'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <feature policy='require' name='overflow-recov'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <feature policy='require' name='succor'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <feature policy='require' name='ibrs'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <feature policy='require' name='amd-ssbd'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <feature policy='require' name='virt-ssbd'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <feature policy='require' name='lbrv'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <feature policy='require' name='tsc-scale'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <feature policy='require' name='vmcb-clean'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <feature policy='require' name='flushbyasid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <feature policy='require' name='pause-filter'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <feature policy='require' name='pfthreshold'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <feature policy='require' name='svme-addr-chk'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <feature policy='require' name='lfence-always-serializing'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <feature policy='disable' name='xsaves'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:    </mode>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:    <mode name='custom' supported='yes'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Broadwell'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='hle'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='rtm'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Broadwell-IBRS'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='hle'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='rtm'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Broadwell-noTSX'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Broadwell-noTSX-IBRS'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Broadwell-v1'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='hle'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='rtm'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Broadwell-v2'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Broadwell-v3'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='hle'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='rtm'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Broadwell-v4'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Cascadelake-Server'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512bw'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512cd'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512dq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512f'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vl'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vnni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='hle'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pku'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='rtm'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Cascadelake-Server-noTSX'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512bw'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512cd'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512dq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512f'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vl'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vnni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='ibrs-all'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pku'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Cascadelake-Server-v1'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512bw'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512cd'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512dq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512f'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vl'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vnni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='hle'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pku'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='rtm'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Cascadelake-Server-v2'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512bw'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512cd'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512dq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512f'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vl'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vnni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='hle'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='ibrs-all'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pku'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='rtm'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Cascadelake-Server-v3'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512bw'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512cd'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512dq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512f'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vl'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vnni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='ibrs-all'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pku'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Cascadelake-Server-v4'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512bw'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512cd'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512dq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512f'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vl'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vnni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='ibrs-all'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pku'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Cascadelake-Server-v5'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512bw'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512cd'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512dq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512f'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vl'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vnni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='ibrs-all'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pku'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='xsaves'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Cooperlake'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512-bf16'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512bw'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512cd'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512dq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512f'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vl'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vnni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='hle'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='ibrs-all'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pku'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='rtm'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='taa-no'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Cooperlake-v1'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512-bf16'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512bw'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512cd'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512dq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512f'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vl'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vnni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='hle'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='ibrs-all'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pku'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='rtm'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='taa-no'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Cooperlake-v2'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512-bf16'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512bw'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512cd'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512dq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512f'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vl'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vnni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='hle'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='ibrs-all'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pku'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='rtm'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='taa-no'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='xsaves'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Denverton'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='mpx'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Denverton-v1'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='mpx'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Denverton-v2'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Denverton-v3'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='xsaves'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Dhyana-v2'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='xsaves'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='EPYC-Genoa'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='amd-psfd'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='auto-ibrs'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512-bf16'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512-vpopcntdq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512bitalg'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512bw'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512cd'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512dq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512f'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512ifma'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vbmi'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vbmi2'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vl'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vnni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='fsrm'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='gfni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='la57'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='no-nested-data-bp'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='null-sel-clr-base'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pku'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='stibp-always-on'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='vaes'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='vpclmulqdq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='xsaves'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='EPYC-Genoa-v1'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='amd-psfd'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='auto-ibrs'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512-bf16'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512-vpopcntdq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512bitalg'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512bw'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512cd'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512dq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512f'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512ifma'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vbmi'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vbmi2'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vl'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vnni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='fsrm'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='gfni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='la57'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='no-nested-data-bp'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='null-sel-clr-base'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pku'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='stibp-always-on'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='vaes'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='vpclmulqdq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='xsaves'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='EPYC-Milan'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='fsrm'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pku'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='xsaves'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='EPYC-Milan-v1'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='fsrm'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pku'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='xsaves'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='EPYC-Milan-v2'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='amd-psfd'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='fsrm'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='no-nested-data-bp'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='null-sel-clr-base'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pku'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='stibp-always-on'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='vaes'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='vpclmulqdq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='xsaves'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='EPYC-Rome'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='xsaves'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='EPYC-Rome-v1'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='xsaves'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='EPYC-Rome-v2'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='xsaves'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='EPYC-Rome-v3'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='xsaves'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='EPYC-v3'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='xsaves'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='EPYC-v4'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='xsaves'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='GraniteRapids'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='amx-bf16'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='amx-fp16'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='amx-int8'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='amx-tile'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx-vnni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512-bf16'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512-fp16'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512-vpopcntdq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512bitalg'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512bw'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512cd'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512dq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512f'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512ifma'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vbmi'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vbmi2'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vl'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vnni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='bus-lock-detect'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='fbsdp-no'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='fsrc'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='fsrm'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='fsrs'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='fzrm'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='gfni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='hle'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='ibrs-all'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='la57'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='mcdt-no'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pbrsb-no'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pku'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='prefetchiti'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='psdp-no'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='rtm'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='sbdr-ssdp-no'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='serialize'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='taa-no'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='tsx-ldtrk'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='vaes'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='vpclmulqdq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='xfd'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='xsaves'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='GraniteRapids-v1'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='amx-bf16'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='amx-fp16'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='amx-int8'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='amx-tile'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx-vnni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512-bf16'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512-fp16'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512-vpopcntdq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512bitalg'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512bw'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512cd'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512dq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512f'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512ifma'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vbmi'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vbmi2'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vl'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vnni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='bus-lock-detect'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='fbsdp-no'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='fsrc'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='fsrm'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='fsrs'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='fzrm'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='gfni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='hle'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='ibrs-all'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='la57'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='mcdt-no'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pbrsb-no'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pku'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='prefetchiti'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='psdp-no'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='rtm'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='sbdr-ssdp-no'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='serialize'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='taa-no'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='tsx-ldtrk'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='vaes'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='vpclmulqdq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='xfd'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='xsaves'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='GraniteRapids-v2'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='amx-bf16'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='amx-fp16'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='amx-int8'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='amx-tile'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx-vnni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx10'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx10-128'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx10-256'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx10-512'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512-bf16'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512-fp16'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512-vpopcntdq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512bitalg'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512bw'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512cd'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512dq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512f'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512ifma'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vbmi'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vbmi2'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vl'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vnni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='bus-lock-detect'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='cldemote'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='fbsdp-no'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='fsrc'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='fsrm'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='fsrs'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='fzrm'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='gfni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='hle'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='ibrs-all'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='la57'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='mcdt-no'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='movdir64b'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='movdiri'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pbrsb-no'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pku'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='prefetchiti'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='psdp-no'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='rtm'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='sbdr-ssdp-no'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='serialize'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='ss'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='taa-no'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='tsx-ldtrk'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='vaes'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='vpclmulqdq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='xfd'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='xsaves'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Haswell'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='hle'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='rtm'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Haswell-IBRS'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='hle'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='rtm'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Haswell-noTSX'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Haswell-noTSX-IBRS'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Haswell-v1'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='hle'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='rtm'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Haswell-v2'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Haswell-v3'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='hle'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='rtm'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Haswell-v4'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Icelake-Server'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512-vpopcntdq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512bitalg'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512bw'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512cd'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512dq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512f'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vbmi'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vbmi2'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vl'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vnni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='gfni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='hle'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='la57'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pku'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='rtm'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='vaes'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='vpclmulqdq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Icelake-Server-noTSX'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512-vpopcntdq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512bitalg'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512bw'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512cd'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512dq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512f'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vbmi'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vbmi2'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vl'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vnni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='gfni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='la57'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pku'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='vaes'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='vpclmulqdq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Icelake-Server-v1'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512-vpopcntdq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512bitalg'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512bw'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512cd'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512dq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512f'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vbmi'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vbmi2'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vl'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vnni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='gfni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='hle'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='la57'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pku'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='rtm'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='vaes'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='vpclmulqdq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Icelake-Server-v2'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512-vpopcntdq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512bitalg'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512bw'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512cd'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512dq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512f'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vbmi'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vbmi2'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vl'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vnni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='gfni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='la57'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pku'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='vaes'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='vpclmulqdq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Icelake-Server-v3'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512-vpopcntdq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512bitalg'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512bw'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512cd'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512dq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512f'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vbmi'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vbmi2'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vl'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vnni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='gfni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='ibrs-all'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='la57'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pku'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='taa-no'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='vaes'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='vpclmulqdq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Icelake-Server-v4'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512-vpopcntdq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512bitalg'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512bw'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512cd'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512dq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512f'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512ifma'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vbmi'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vbmi2'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vl'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vnni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='fsrm'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='gfni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='ibrs-all'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='la57'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pku'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='taa-no'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='vaes'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='vpclmulqdq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Icelake-Server-v5'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512-vpopcntdq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512bitalg'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512bw'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512cd'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512dq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512f'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512ifma'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vbmi'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vbmi2'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vl'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vnni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='fsrm'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='gfni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='ibrs-all'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='la57'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pku'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='taa-no'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='vaes'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='vpclmulqdq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='xsaves'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Icelake-Server-v6'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512-vpopcntdq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512bitalg'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512bw'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512cd'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512dq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512f'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512ifma'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vbmi'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vbmi2'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vl'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vnni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='fsrm'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='gfni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='ibrs-all'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='la57'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pku'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='taa-no'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='vaes'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='vpclmulqdq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='xsaves'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Icelake-Server-v7'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512-vpopcntdq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512bitalg'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512bw'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512cd'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512dq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512f'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512ifma'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vbmi'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vbmi2'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vl'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vnni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='fsrm'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='gfni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='hle'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='ibrs-all'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='la57'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pku'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='rtm'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='taa-no'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='vaes'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='vpclmulqdq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='xsaves'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='IvyBridge'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='IvyBridge-IBRS'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='IvyBridge-v1'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='IvyBridge-v2'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='KnightsMill'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512-4fmaps'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512-4vnniw'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512-vpopcntdq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512cd'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512er'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512f'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512pf'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='ss'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='KnightsMill-v1'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512-4fmaps'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512-4vnniw'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512-vpopcntdq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512cd'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512er'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512f'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512pf'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='ss'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Opteron_G4'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='fma4'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='xop'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Opteron_G4-v1'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='fma4'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='xop'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Opteron_G5'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='fma4'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='tbm'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='xop'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Opteron_G5-v1'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='fma4'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='tbm'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='xop'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='SapphireRapids'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='amx-bf16'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='amx-int8'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='amx-tile'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx-vnni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512-bf16'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512-fp16'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512-vpopcntdq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512bitalg'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512bw'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512cd'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512dq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512f'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512ifma'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vbmi'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vbmi2'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vl'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vnni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='bus-lock-detect'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='fsrc'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='fsrm'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='fsrs'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='fzrm'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='gfni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='hle'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='ibrs-all'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='la57'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pku'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='rtm'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='serialize'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='taa-no'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='tsx-ldtrk'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='vaes'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='vpclmulqdq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='xfd'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='xsaves'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='SapphireRapids-v1'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='amx-bf16'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='amx-int8'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='amx-tile'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx-vnni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512-bf16'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512-fp16'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512-vpopcntdq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512bitalg'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512bw'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512cd'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512dq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512f'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512ifma'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vbmi'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vbmi2'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vl'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vnni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='bus-lock-detect'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='fsrc'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='fsrm'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='fsrs'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='fzrm'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='gfni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='hle'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='ibrs-all'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='la57'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pku'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='rtm'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='serialize'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='taa-no'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='tsx-ldtrk'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='vaes'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='vpclmulqdq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='xfd'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='xsaves'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='SapphireRapids-v2'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='amx-bf16'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='amx-int8'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='amx-tile'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx-vnni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512-bf16'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512-fp16'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512-vpopcntdq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512bitalg'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512bw'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512cd'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512dq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512f'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512ifma'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vbmi'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vbmi2'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vl'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vnni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='bus-lock-detect'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='fbsdp-no'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='fsrc'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='fsrm'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='fsrs'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='fzrm'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='gfni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='hle'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='ibrs-all'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='la57'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pku'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='psdp-no'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='rtm'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='sbdr-ssdp-no'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='serialize'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='taa-no'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='tsx-ldtrk'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='vaes'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='vpclmulqdq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='xfd'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='xsaves'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='SapphireRapids-v3'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='amx-bf16'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='amx-int8'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='amx-tile'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx-vnni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512-bf16'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512-fp16'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512-vpopcntdq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512bitalg'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512bw'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512cd'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512dq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512f'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512ifma'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vbmi'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vbmi2'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vl'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vnni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='bus-lock-detect'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='cldemote'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='fbsdp-no'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='fsrc'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='fsrm'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='fsrs'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='fzrm'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='gfni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='hle'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='ibrs-all'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='la57'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='movdir64b'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='movdiri'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pku'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='psdp-no'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='rtm'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='sbdr-ssdp-no'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='serialize'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='ss'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='taa-no'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='tsx-ldtrk'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='vaes'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='vpclmulqdq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='xfd'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='xsaves'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='SierraForest'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx-ifma'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx-ne-convert'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx-vnni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx-vnni-int8'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='bus-lock-detect'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='cmpccxadd'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='fbsdp-no'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='fsrm'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='fsrs'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='gfni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='ibrs-all'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='mcdt-no'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pbrsb-no'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pku'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='psdp-no'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='sbdr-ssdp-no'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='serialize'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='vaes'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='vpclmulqdq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='xsaves'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='SierraForest-v1'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx-ifma'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx-ne-convert'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx-vnni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx-vnni-int8'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='bus-lock-detect'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='cmpccxadd'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='fbsdp-no'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='fsrm'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='fsrs'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='gfni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='ibrs-all'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='mcdt-no'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pbrsb-no'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pku'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='psdp-no'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='sbdr-ssdp-no'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='serialize'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='vaes'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='vpclmulqdq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='xsaves'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Skylake-Client'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='hle'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='rtm'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Skylake-Client-IBRS'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='hle'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='rtm'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Skylake-Client-v1'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='hle'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='rtm'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Skylake-Client-v2'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='hle'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='rtm'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Skylake-Client-v3'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Skylake-Client-v4'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='xsaves'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Skylake-Server'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512bw'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512cd'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512dq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512f'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vl'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='hle'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pku'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='rtm'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Skylake-Server-IBRS'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512bw'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512cd'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512dq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512f'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vl'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='hle'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pku'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='rtm'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512bw'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512cd'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512dq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512f'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vl'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pku'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Skylake-Server-v1'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512bw'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512cd'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512dq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512f'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vl'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='hle'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pku'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='rtm'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Skylake-Server-v2'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512bw'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512cd'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512dq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512f'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vl'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='hle'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pku'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='rtm'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Skylake-Server-v3'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512bw'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512cd'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512dq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512f'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vl'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pku'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Skylake-Server-v4'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512bw'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512cd'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512dq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512f'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vl'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pku'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Skylake-Server-v5'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512bw'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512cd'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512dq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512f'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vl'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pku'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='xsaves'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Snowridge'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='cldemote'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='core-capability'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='gfni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='movdir64b'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='movdiri'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='mpx'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='split-lock-detect'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Snowridge-v1'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='cldemote'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='core-capability'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='gfni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='movdir64b'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='movdiri'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='mpx'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='split-lock-detect'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Snowridge-v2'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='cldemote'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='core-capability'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='gfni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='movdir64b'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='movdiri'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='split-lock-detect'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Snowridge-v3'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='cldemote'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='core-capability'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='gfni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='movdir64b'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='movdiri'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='split-lock-detect'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='xsaves'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Snowridge-v4'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='cldemote'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='gfni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='movdir64b'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='movdiri'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='xsaves'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='athlon'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='3dnow'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='3dnowext'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='athlon-v1'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='3dnow'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='3dnowext'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='core2duo'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='ss'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='core2duo-v1'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='ss'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='coreduo'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='ss'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='coreduo-v1'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='ss'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='n270'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='ss'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='n270-v1'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='ss'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='phenom'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='3dnow'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='3dnowext'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='phenom-v1'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='3dnow'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='3dnowext'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:    </mode>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:  </cpu>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:  <memoryBacking supported='yes'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:    <enum name='sourceType'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <value>file</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <value>anonymous</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <value>memfd</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:    </enum>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:  </memoryBacking>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:  <devices>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:    <disk supported='yes'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <enum name='diskDevice'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>disk</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>cdrom</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>floppy</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>lun</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </enum>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <enum name='bus'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>fdc</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>scsi</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>virtio</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>usb</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>sata</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </enum>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <enum name='model'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>virtio</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>virtio-transitional</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>virtio-non-transitional</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </enum>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:    </disk>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:    <graphics supported='yes'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <enum name='type'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>vnc</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>egl-headless</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>dbus</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </enum>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:    </graphics>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:    <video supported='yes'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <enum name='modelType'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>vga</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>cirrus</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>virtio</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>none</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>bochs</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>ramfb</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </enum>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:    </video>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:    <hostdev supported='yes'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <enum name='mode'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>subsystem</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </enum>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <enum name='startupPolicy'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>default</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>mandatory</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>requisite</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>optional</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </enum>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <enum name='subsysType'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>usb</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>pci</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>scsi</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </enum>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <enum name='capsType'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <enum name='pciBackend'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:    </hostdev>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:    <rng supported='yes'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <enum name='model'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>virtio</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>virtio-transitional</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>virtio-non-transitional</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </enum>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <enum name='backendModel'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>random</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>egd</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>builtin</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </enum>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:    </rng>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:    <filesystem supported='yes'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <enum name='driverType'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>path</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>handle</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>virtiofs</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </enum>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:    </filesystem>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:    <tpm supported='yes'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <enum name='model'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>tpm-tis</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>tpm-crb</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </enum>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <enum name='backendModel'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>emulator</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>external</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </enum>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <enum name='backendVersion'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>2.0</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </enum>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:    </tpm>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:    <redirdev supported='yes'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <enum name='bus'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>usb</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </enum>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:    </redirdev>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:    <channel supported='yes'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <enum name='type'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>pty</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>unix</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </enum>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:    </channel>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:    <crypto supported='yes'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <enum name='model'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <enum name='type'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>qemu</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </enum>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <enum name='backendModel'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>builtin</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </enum>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:    </crypto>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:    <interface supported='yes'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <enum name='backendType'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>default</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>passt</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </enum>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:    </interface>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:    <panic supported='yes'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <enum name='model'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>isa</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>hyperv</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </enum>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:    </panic>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:    <console supported='yes'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <enum name='type'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>null</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>vc</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>pty</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>dev</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>file</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>pipe</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>stdio</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>udp</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>tcp</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>unix</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>qemu-vdagent</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>dbus</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </enum>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:    </console>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:  </devices>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:  <features>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:    <gic supported='no'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:    <vmcoreinfo supported='yes'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:    <genid supported='yes'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:    <backingStoreInput supported='yes'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:    <backup supported='yes'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:    <async-teardown supported='yes'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:    <ps2 supported='yes'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:    <sev supported='no'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:    <sgx supported='no'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:    <hyperv supported='yes'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <enum name='features'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>relaxed</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>vapic</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>spinlocks</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>vpindex</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>runtime</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>synic</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>stimer</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>reset</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>vendor_id</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>frequencies</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>reenlightenment</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>tlbflush</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>ipi</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>avic</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>emsr_bitmap</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>xmm_input</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </enum>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <defaults>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <spinlocks>4095</spinlocks>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <stimer_direct>on</stimer_direct>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <tlbflush_direct>on</tlbflush_direct>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <tlbflush_extended>on</tlbflush_extended>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <vendor_id>Linux KVM Hv</vendor_id>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </defaults>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:    </hyperv>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:    <launchSecurity supported='yes'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <enum name='sectype'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>tdx</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </enum>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:    </launchSecurity>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:  </features>
Dec  9 05:39:13 np0005551604 nova_compute[188553]: </domainCapabilities>
Dec  9 05:39:13 np0005551604 nova_compute[188553]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Dec  9 05:39:13 np0005551604 nova_compute[188553]: 2025-12-09 10:39:13.702 188558 DEBUG nova.virt.libvirt.host [None req-73704fe7-ec8a-423b-91ec-a3a32cc7fc3d - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Dec  9 05:39:13 np0005551604 nova_compute[188553]: <domainCapabilities>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:  <path>/usr/libexec/qemu-kvm</path>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:  <domain>kvm</domain>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:  <machine>pc-i440fx-rhel7.6.0</machine>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:  <arch>x86_64</arch>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:  <vcpu max='240'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:  <iothreads supported='yes'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:  <os supported='yes'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:    <enum name='firmware'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:    <loader supported='yes'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <enum name='type'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>rom</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>pflash</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </enum>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <enum name='readonly'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>yes</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>no</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </enum>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <enum name='secure'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>no</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </enum>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:    </loader>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:  </os>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:  <cpu>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:    <mode name='host-passthrough' supported='yes'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <enum name='hostPassthroughMigratable'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>on</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>off</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </enum>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:    </mode>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:    <mode name='maximum' supported='yes'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <enum name='maximumMigratable'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>on</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>off</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </enum>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:    </mode>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:    <mode name='host-model' supported='yes'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model fallback='forbid'>EPYC-Rome</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <vendor>AMD</vendor>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <maxphysaddr mode='passthrough' limit='40'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <feature policy='require' name='x2apic'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <feature policy='require' name='tsc-deadline'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <feature policy='require' name='hypervisor'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <feature policy='require' name='tsc_adjust'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <feature policy='require' name='spec-ctrl'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <feature policy='require' name='stibp'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <feature policy='require' name='ssbd'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <feature policy='require' name='cmp_legacy'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <feature policy='require' name='overflow-recov'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <feature policy='require' name='succor'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <feature policy='require' name='ibrs'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <feature policy='require' name='amd-ssbd'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <feature policy='require' name='virt-ssbd'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <feature policy='require' name='lbrv'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <feature policy='require' name='tsc-scale'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <feature policy='require' name='vmcb-clean'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <feature policy='require' name='flushbyasid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <feature policy='require' name='pause-filter'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <feature policy='require' name='pfthreshold'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <feature policy='require' name='svme-addr-chk'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <feature policy='require' name='lfence-always-serializing'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <feature policy='disable' name='xsaves'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:    </mode>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:    <mode name='custom' supported='yes'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Broadwell'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='hle'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='rtm'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Broadwell-IBRS'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='hle'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='rtm'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Broadwell-noTSX'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Broadwell-noTSX-IBRS'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Broadwell-v1'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='hle'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='rtm'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Broadwell-v2'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Broadwell-v3'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='hle'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='rtm'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Broadwell-v4'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Cascadelake-Server'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512bw'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512cd'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512dq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512f'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vl'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vnni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='hle'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pku'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='rtm'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Cascadelake-Server-noTSX'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512bw'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512cd'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512dq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512f'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vl'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vnni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='ibrs-all'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pku'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Cascadelake-Server-v1'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512bw'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512cd'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512dq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512f'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vl'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vnni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='hle'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pku'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='rtm'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Cascadelake-Server-v2'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512bw'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512cd'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512dq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512f'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vl'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vnni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='hle'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='ibrs-all'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pku'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='rtm'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Cascadelake-Server-v3'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512bw'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512cd'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512dq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512f'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vl'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vnni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='ibrs-all'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pku'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Cascadelake-Server-v4'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512bw'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512cd'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512dq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512f'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vl'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vnni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='ibrs-all'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pku'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Cascadelake-Server-v5'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512bw'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512cd'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512dq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512f'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vl'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vnni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='ibrs-all'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pku'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='xsaves'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Cooperlake'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512-bf16'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512bw'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512cd'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512dq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512f'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vl'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vnni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='hle'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='ibrs-all'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pku'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='rtm'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='taa-no'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Cooperlake-v1'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512-bf16'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512bw'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512cd'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512dq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512f'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vl'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vnni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='hle'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='ibrs-all'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pku'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='rtm'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='taa-no'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Cooperlake-v2'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512-bf16'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512bw'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512cd'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512dq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512f'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vl'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vnni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='hle'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='ibrs-all'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pku'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='rtm'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='taa-no'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='xsaves'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Denverton'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='mpx'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Denverton-v1'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='mpx'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Denverton-v2'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Denverton-v3'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='xsaves'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Dhyana-v2'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='xsaves'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='EPYC-Genoa'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='amd-psfd'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='auto-ibrs'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512-bf16'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512-vpopcntdq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512bitalg'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512bw'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512cd'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512dq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512f'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512ifma'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vbmi'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vbmi2'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vl'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vnni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='fsrm'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='gfni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='la57'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='no-nested-data-bp'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='null-sel-clr-base'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pku'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='stibp-always-on'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='vaes'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='vpclmulqdq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='xsaves'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='EPYC-Genoa-v1'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='amd-psfd'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='auto-ibrs'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512-bf16'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512-vpopcntdq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512bitalg'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512bw'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512cd'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512dq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512f'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512ifma'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vbmi'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vbmi2'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vl'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vnni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='fsrm'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='gfni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='la57'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='no-nested-data-bp'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='null-sel-clr-base'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pku'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='stibp-always-on'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='vaes'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='vpclmulqdq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='xsaves'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='EPYC-Milan'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='fsrm'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pku'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='xsaves'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='EPYC-Milan-v1'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='fsrm'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pku'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='xsaves'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='EPYC-Milan-v2'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='amd-psfd'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='fsrm'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='no-nested-data-bp'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='null-sel-clr-base'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pku'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='stibp-always-on'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='vaes'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='vpclmulqdq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='xsaves'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='EPYC-Rome'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='xsaves'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='EPYC-Rome-v1'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='xsaves'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='EPYC-Rome-v2'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='xsaves'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='EPYC-Rome-v3'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='xsaves'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='EPYC-v3'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='xsaves'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='EPYC-v4'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='xsaves'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='GraniteRapids'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='amx-bf16'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='amx-fp16'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='amx-int8'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='amx-tile'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx-vnni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512-bf16'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512-fp16'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512-vpopcntdq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512bitalg'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512bw'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512cd'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512dq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512f'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512ifma'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vbmi'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vbmi2'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vl'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vnni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='bus-lock-detect'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='fbsdp-no'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='fsrc'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='fsrm'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='fsrs'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='fzrm'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='gfni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='hle'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='ibrs-all'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='la57'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='mcdt-no'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pbrsb-no'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pku'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='prefetchiti'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='psdp-no'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='rtm'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='sbdr-ssdp-no'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='serialize'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='taa-no'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='tsx-ldtrk'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='vaes'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='vpclmulqdq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='xfd'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='xsaves'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='GraniteRapids-v1'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='amx-bf16'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='amx-fp16'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='amx-int8'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='amx-tile'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx-vnni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512-bf16'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512-fp16'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512-vpopcntdq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512bitalg'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512bw'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512cd'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512dq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512f'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512ifma'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vbmi'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vbmi2'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vl'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vnni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='bus-lock-detect'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='fbsdp-no'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='fsrc'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='fsrm'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='fsrs'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='fzrm'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='gfni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='hle'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='ibrs-all'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='la57'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='mcdt-no'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pbrsb-no'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pku'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='prefetchiti'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='psdp-no'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='rtm'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='sbdr-ssdp-no'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='serialize'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='taa-no'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='tsx-ldtrk'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='vaes'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='vpclmulqdq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='xfd'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='xsaves'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='GraniteRapids-v2'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='amx-bf16'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='amx-fp16'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='amx-int8'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='amx-tile'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx-vnni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx10'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx10-128'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx10-256'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx10-512'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512-bf16'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512-fp16'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512-vpopcntdq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512bitalg'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512bw'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512cd'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512dq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512f'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512ifma'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vbmi'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vbmi2'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vl'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vnni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='bus-lock-detect'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='cldemote'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='fbsdp-no'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='fsrc'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='fsrm'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='fsrs'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='fzrm'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='gfni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='hle'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='ibrs-all'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='la57'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='mcdt-no'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='movdir64b'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='movdiri'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pbrsb-no'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pku'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='prefetchiti'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='psdp-no'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='rtm'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='sbdr-ssdp-no'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='serialize'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='ss'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='taa-no'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='tsx-ldtrk'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='vaes'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='vpclmulqdq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='xfd'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='xsaves'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Haswell'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='hle'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='rtm'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Haswell-IBRS'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='hle'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='rtm'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Haswell-noTSX'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Haswell-noTSX-IBRS'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Haswell-v1'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='hle'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='rtm'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Haswell-v2'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Haswell-v3'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='hle'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='rtm'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Haswell-v4'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Icelake-Server'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512-vpopcntdq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512bitalg'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512bw'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512cd'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512dq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512f'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vbmi'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vbmi2'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vl'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vnni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='gfni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='hle'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='la57'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pku'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='rtm'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='vaes'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='vpclmulqdq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Icelake-Server-noTSX'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512-vpopcntdq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512bitalg'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512bw'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512cd'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512dq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512f'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vbmi'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vbmi2'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vl'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vnni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='gfni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='la57'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pku'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='vaes'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='vpclmulqdq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Icelake-Server-v1'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512-vpopcntdq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512bitalg'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512bw'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512cd'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512dq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512f'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vbmi'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vbmi2'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vl'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vnni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='gfni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='hle'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='la57'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pku'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='rtm'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='vaes'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='vpclmulqdq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Icelake-Server-v2'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512-vpopcntdq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512bitalg'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512bw'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512cd'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512dq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512f'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vbmi'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vbmi2'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vl'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vnni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='gfni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='la57'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pku'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='vaes'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='vpclmulqdq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Icelake-Server-v3'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512-vpopcntdq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512bitalg'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512bw'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512cd'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512dq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512f'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vbmi'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vbmi2'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vl'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vnni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='gfni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='ibrs-all'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='la57'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pku'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='taa-no'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='vaes'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='vpclmulqdq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Icelake-Server-v4'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512-vpopcntdq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512bitalg'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512bw'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512cd'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512dq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512f'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512ifma'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vbmi'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vbmi2'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vl'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vnni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='fsrm'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='gfni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='ibrs-all'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='la57'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pku'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='taa-no'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='vaes'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='vpclmulqdq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Icelake-Server-v5'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512-vpopcntdq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512bitalg'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512bw'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512cd'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512dq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512f'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512ifma'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vbmi'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vbmi2'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vl'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vnni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='fsrm'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='gfni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='ibrs-all'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='la57'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pku'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='taa-no'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='vaes'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='vpclmulqdq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='xsaves'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Icelake-Server-v6'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512-vpopcntdq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512bitalg'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512bw'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512cd'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512dq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512f'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512ifma'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vbmi'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vbmi2'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vl'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vnni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='fsrm'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='gfni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='ibrs-all'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='la57'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pku'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='taa-no'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='vaes'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='vpclmulqdq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='xsaves'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Icelake-Server-v7'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512-vpopcntdq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512bitalg'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512bw'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512cd'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512dq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512f'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512ifma'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vbmi'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vbmi2'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vl'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vnni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='fsrm'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='gfni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='hle'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='ibrs-all'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='la57'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pku'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='rtm'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='taa-no'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='vaes'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='vpclmulqdq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='xsaves'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='IvyBridge'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='IvyBridge-IBRS'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='IvyBridge-v1'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='IvyBridge-v2'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='KnightsMill'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512-4fmaps'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512-4vnniw'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512-vpopcntdq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512cd'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512er'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512f'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512pf'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='ss'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='KnightsMill-v1'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512-4fmaps'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512-4vnniw'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512-vpopcntdq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512cd'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512er'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512f'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512pf'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='ss'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Opteron_G4'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='fma4'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='xop'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Opteron_G4-v1'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='fma4'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='xop'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Opteron_G5'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='fma4'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='tbm'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='xop'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Opteron_G5-v1'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='fma4'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='tbm'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='xop'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='SapphireRapids'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='amx-bf16'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='amx-int8'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='amx-tile'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx-vnni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512-bf16'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512-fp16'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512-vpopcntdq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512bitalg'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512bw'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512cd'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512dq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512f'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512ifma'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vbmi'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vbmi2'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vl'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vnni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='bus-lock-detect'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='fsrc'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='fsrm'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='fsrs'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='fzrm'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='gfni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='hle'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='ibrs-all'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='la57'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pku'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='rtm'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='serialize'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='taa-no'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='tsx-ldtrk'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='vaes'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='vpclmulqdq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='xfd'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='xsaves'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='SapphireRapids-v1'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='amx-bf16'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='amx-int8'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='amx-tile'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx-vnni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512-bf16'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512-fp16'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512-vpopcntdq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512bitalg'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512bw'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512cd'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512dq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512f'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512ifma'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vbmi'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vbmi2'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vl'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vnni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='bus-lock-detect'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='fsrc'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='fsrm'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='fsrs'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='fzrm'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='gfni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='hle'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='ibrs-all'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='la57'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pku'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='rtm'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='serialize'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='taa-no'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='tsx-ldtrk'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='vaes'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='vpclmulqdq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='xfd'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='xsaves'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='SapphireRapids-v2'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='amx-bf16'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='amx-int8'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='amx-tile'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx-vnni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512-bf16'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512-fp16'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512-vpopcntdq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512bitalg'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512bw'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512cd'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512dq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512f'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512ifma'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vbmi'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vbmi2'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vl'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vnni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='bus-lock-detect'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='fbsdp-no'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='fsrc'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='fsrm'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='fsrs'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='fzrm'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='gfni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='hle'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='ibrs-all'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='la57'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pku'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='psdp-no'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='rtm'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='sbdr-ssdp-no'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='serialize'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='taa-no'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='tsx-ldtrk'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='vaes'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='vpclmulqdq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='xfd'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='xsaves'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='SapphireRapids-v3'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='amx-bf16'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='amx-int8'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='amx-tile'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx-vnni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512-bf16'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512-fp16'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512-vpopcntdq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512bitalg'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512bw'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512cd'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512dq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512f'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512ifma'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vbmi'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vbmi2'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vl'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vnni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='bus-lock-detect'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='cldemote'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='fbsdp-no'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='fsrc'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='fsrm'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='fsrs'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='fzrm'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='gfni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='hle'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='ibrs-all'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='la57'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='movdir64b'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='movdiri'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pku'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='psdp-no'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='rtm'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='sbdr-ssdp-no'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='serialize'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='ss'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='taa-no'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='tsx-ldtrk'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='vaes'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='vpclmulqdq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='xfd'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='xsaves'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='SierraForest'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx-ifma'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx-ne-convert'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx-vnni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx-vnni-int8'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='bus-lock-detect'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='cmpccxadd'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='fbsdp-no'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='fsrm'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='fsrs'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='gfni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='ibrs-all'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='mcdt-no'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pbrsb-no'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pku'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='psdp-no'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='sbdr-ssdp-no'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='serialize'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='vaes'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='vpclmulqdq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='xsaves'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='SierraForest-v1'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx-ifma'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx-ne-convert'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx-vnni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx-vnni-int8'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='bus-lock-detect'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='cmpccxadd'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='fbsdp-no'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='fsrm'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='fsrs'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='gfni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='ibrs-all'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='mcdt-no'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pbrsb-no'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pku'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='psdp-no'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='sbdr-ssdp-no'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='serialize'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='vaes'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='vpclmulqdq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='xsaves'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Skylake-Client'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='hle'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='rtm'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Skylake-Client-IBRS'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='hle'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='rtm'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Skylake-Client-v1'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='hle'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='rtm'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Skylake-Client-v2'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='hle'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='rtm'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Skylake-Client-v3'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Skylake-Client-v4'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='xsaves'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Skylake-Server'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512bw'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512cd'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512dq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512f'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vl'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='hle'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pku'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='rtm'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Skylake-Server-IBRS'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512bw'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512cd'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512dq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512f'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vl'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='hle'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pku'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='rtm'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512bw'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512cd'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512dq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512f'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vl'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pku'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Skylake-Server-v1'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512bw'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512cd'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512dq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512f'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vl'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='hle'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pku'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='rtm'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Skylake-Server-v2'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512bw'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512cd'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512dq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512f'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vl'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='hle'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pku'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='rtm'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Skylake-Server-v3'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512bw'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512cd'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512dq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512f'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vl'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pku'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Skylake-Server-v4'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512bw'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512cd'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512dq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512f'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vl'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pku'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Skylake-Server-v5'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512bw'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512cd'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512dq'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512f'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='avx512vl'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='invpcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pcid'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='pku'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='xsaves'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Snowridge'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='cldemote'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='core-capability'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='gfni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='movdir64b'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='movdiri'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='mpx'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='split-lock-detect'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Snowridge-v1'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='cldemote'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='core-capability'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='gfni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='movdir64b'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='movdiri'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='mpx'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='split-lock-detect'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Snowridge-v2'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='cldemote'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='core-capability'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='gfni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='movdir64b'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='movdiri'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='split-lock-detect'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Snowridge-v3'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='cldemote'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='core-capability'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='gfni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='movdir64b'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='movdiri'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='split-lock-detect'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='xsaves'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='Snowridge-v4'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='cldemote'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='erms'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='gfni'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='movdir64b'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='movdiri'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='xsaves'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='athlon'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='3dnow'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='3dnowext'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='athlon-v1'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='3dnow'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='3dnowext'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='core2duo'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='ss'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='core2duo-v1'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='ss'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='coreduo'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='ss'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='coreduo-v1'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='ss'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='n270'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='ss'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='n270-v1'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='ss'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='phenom'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='3dnow'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='3dnowext'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <blockers model='phenom-v1'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='3dnow'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <feature name='3dnowext'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </blockers>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:    </mode>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:  </cpu>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:  <memoryBacking supported='yes'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:    <enum name='sourceType'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <value>file</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <value>anonymous</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <value>memfd</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:    </enum>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:  </memoryBacking>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:  <devices>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:    <disk supported='yes'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <enum name='diskDevice'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>disk</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>cdrom</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>floppy</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>lun</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </enum>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <enum name='bus'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>ide</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>fdc</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>scsi</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>virtio</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>usb</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>sata</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </enum>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <enum name='model'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>virtio</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>virtio-transitional</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>virtio-non-transitional</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </enum>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:    </disk>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:    <graphics supported='yes'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <enum name='type'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>vnc</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>egl-headless</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>dbus</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </enum>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:    </graphics>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:    <video supported='yes'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <enum name='modelType'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>vga</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>cirrus</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>virtio</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>none</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>bochs</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>ramfb</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </enum>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:    </video>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:    <hostdev supported='yes'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <enum name='mode'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>subsystem</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </enum>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <enum name='startupPolicy'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>default</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>mandatory</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>requisite</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>optional</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </enum>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <enum name='subsysType'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>usb</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>pci</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>scsi</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </enum>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <enum name='capsType'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <enum name='pciBackend'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:    </hostdev>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:    <rng supported='yes'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <enum name='model'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>virtio</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>virtio-transitional</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>virtio-non-transitional</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </enum>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <enum name='backendModel'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>random</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>egd</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>builtin</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </enum>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:    </rng>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:    <filesystem supported='yes'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <enum name='driverType'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>path</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>handle</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>virtiofs</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </enum>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:    </filesystem>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:    <tpm supported='yes'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <enum name='model'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>tpm-tis</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>tpm-crb</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </enum>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <enum name='backendModel'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>emulator</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>external</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </enum>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <enum name='backendVersion'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>2.0</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </enum>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:    </tpm>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:    <redirdev supported='yes'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <enum name='bus'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>usb</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </enum>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:    </redirdev>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:    <channel supported='yes'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <enum name='type'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>pty</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>unix</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </enum>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:    </channel>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:    <crypto supported='yes'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <enum name='model'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <enum name='type'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>qemu</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </enum>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <enum name='backendModel'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>builtin</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </enum>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:    </crypto>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:    <interface supported='yes'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <enum name='backendType'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>default</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>passt</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </enum>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:    </interface>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:    <panic supported='yes'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <enum name='model'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>isa</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>hyperv</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </enum>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:    </panic>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:    <console supported='yes'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <enum name='type'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>null</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>vc</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>pty</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>dev</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>file</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>pipe</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>stdio</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>udp</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>tcp</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>unix</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>qemu-vdagent</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>dbus</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </enum>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:    </console>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:  </devices>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:  <features>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:    <gic supported='no'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:    <vmcoreinfo supported='yes'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:    <genid supported='yes'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:    <backingStoreInput supported='yes'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:    <backup supported='yes'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:    <async-teardown supported='yes'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:    <ps2 supported='yes'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:    <sev supported='no'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:    <sgx supported='no'/>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:    <hyperv supported='yes'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <enum name='features'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>relaxed</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>vapic</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>spinlocks</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>vpindex</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>runtime</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>synic</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>stimer</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>reset</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>vendor_id</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>frequencies</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>reenlightenment</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>tlbflush</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>ipi</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>avic</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>emsr_bitmap</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>xmm_input</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </enum>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <defaults>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <spinlocks>4095</spinlocks>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <stimer_direct>on</stimer_direct>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <tlbflush_direct>on</tlbflush_direct>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <tlbflush_extended>on</tlbflush_extended>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <vendor_id>Linux KVM Hv</vendor_id>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </defaults>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:    </hyperv>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:    <launchSecurity supported='yes'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      <enum name='sectype'>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:        <value>tdx</value>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:      </enum>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:    </launchSecurity>
Dec  9 05:39:13 np0005551604 nova_compute[188553]:  </features>
Dec  9 05:39:13 np0005551604 nova_compute[188553]: </domainCapabilities>
Dec  9 05:39:13 np0005551604 nova_compute[188553]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Dec  9 05:39:13 np0005551604 nova_compute[188553]: 2025-12-09 10:39:13.765 188558 DEBUG nova.virt.libvirt.host [None req-73704fe7-ec8a-423b-91ec-a3a32cc7fc3d - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Dec  9 05:39:13 np0005551604 nova_compute[188553]: 2025-12-09 10:39:13.766 188558 INFO nova.virt.libvirt.host [None req-73704fe7-ec8a-423b-91ec-a3a32cc7fc3d - - - - - -] Secure Boot support detected
Dec  9 05:39:13 np0005551604 nova_compute[188553]: 2025-12-09 10:39:13.769 188558 INFO nova.virt.libvirt.driver [None req-73704fe7-ec8a-423b-91ec-a3a32cc7fc3d - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Dec  9 05:39:13 np0005551604 nova_compute[188553]: 2025-12-09 10:39:13.785 188558 DEBUG nova.virt.libvirt.driver [None req-73704fe7-ec8a-423b-91ec-a3a32cc7fc3d - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097
Dec  9 05:39:13 np0005551604 nova_compute[188553]: 2025-12-09 10:39:13.827 188558 INFO nova.virt.node [None req-73704fe7-ec8a-423b-91ec-a3a32cc7fc3d - - - - - -] Determined node identity cdc1168d-33c9-4d2c-8f23-1b695a68afd0 from /var/lib/nova/compute_id
Dec  9 05:39:13 np0005551604 nova_compute[188553]: 2025-12-09 10:39:13.856 188558 WARNING nova.compute.manager [None req-73704fe7-ec8a-423b-91ec-a3a32cc7fc3d - - - - - -] Compute nodes ['cdc1168d-33c9-4d2c-8f23-1b695a68afd0'] for host compute-0.ctlplane.example.com were not found in the database. If this is the first time this service is starting on this host, then you can ignore this warning.
Dec  9 05:39:13 np0005551604 nova_compute[188553]: 2025-12-09 10:39:13.890 188558 INFO nova.compute.manager [None req-73704fe7-ec8a-423b-91ec-a3a32cc7fc3d - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host
Dec  9 05:39:13 np0005551604 nova_compute[188553]: 2025-12-09 10:39:13.943 188558 WARNING nova.compute.manager [None req-73704fe7-ec8a-423b-91ec-a3a32cc7fc3d - - - - - -] No compute node record found for host compute-0.ctlplane.example.com. If this is the first time this service is starting on this host, then you can ignore this warning.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.
Dec  9 05:39:13 np0005551604 nova_compute[188553]: 2025-12-09 10:39:13.943 188558 DEBUG oslo_concurrency.lockutils [None req-73704fe7-ec8a-423b-91ec-a3a32cc7fc3d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  9 05:39:13 np0005551604 nova_compute[188553]: 2025-12-09 10:39:13.944 188558 DEBUG oslo_concurrency.lockutils [None req-73704fe7-ec8a-423b-91ec-a3a32cc7fc3d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  9 05:39:13 np0005551604 nova_compute[188553]: 2025-12-09 10:39:13.944 188558 DEBUG oslo_concurrency.lockutils [None req-73704fe7-ec8a-423b-91ec-a3a32cc7fc3d - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  9 05:39:13 np0005551604 nova_compute[188553]: 2025-12-09 10:39:13.945 188558 DEBUG nova.compute.resource_tracker [None req-73704fe7-ec8a-423b-91ec-a3a32cc7fc3d - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec  9 05:39:13 np0005551604 systemd[1]: Starting libvirt nodedev daemon...
Dec  9 05:39:14 np0005551604 systemd[1]: Started libvirt nodedev daemon.
Dec  9 05:39:14 np0005551604 python3.9[189408]: ansible-ansible.builtin.systemd Invoked with name=edpm_nova_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  9 05:39:14 np0005551604 systemd[1]: Stopping nova_compute container...
Dec  9 05:39:14 np0005551604 nova_compute[188553]: 2025-12-09 10:39:14.281 188558 WARNING nova.virt.libvirt.driver [None req-73704fe7-ec8a-423b-91ec-a3a32cc7fc3d - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec  9 05:39:14 np0005551604 nova_compute[188553]: 2025-12-09 10:39:14.282 188558 DEBUG nova.compute.resource_tracker [None req-73704fe7-ec8a-423b-91ec-a3a32cc7fc3d - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=6043MB free_disk=72.40962219238281GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec  9 05:39:14 np0005551604 nova_compute[188553]: 2025-12-09 10:39:14.282 188558 DEBUG oslo_concurrency.lockutils [None req-73704fe7-ec8a-423b-91ec-a3a32cc7fc3d - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  9 05:39:14 np0005551604 nova_compute[188553]: 2025-12-09 10:39:14.282 188558 DEBUG oslo_concurrency.lockutils [None req-73704fe7-ec8a-423b-91ec-a3a32cc7fc3d - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  9 05:39:14 np0005551604 virtqemud[189118]: libvirt version: 11.9.0, package: 1.el9 (builder@centos.org, 2025-11-04-09:54:50, )
Dec  9 05:39:14 np0005551604 virtqemud[189118]: hostname: compute-0
Dec  9 05:39:14 np0005551604 virtqemud[189118]: End of file while reading data: Input/output error
Dec  9 05:39:14 np0005551604 systemd[1]: libpod-2cfb1116e5c21d944d07a9fbc165d93c5b9ded72611d7fd6ca79caaed003ea14.scope: Deactivated successfully.
Dec  9 05:39:14 np0005551604 systemd[1]: libpod-2cfb1116e5c21d944d07a9fbc165d93c5b9ded72611d7fd6ca79caaed003ea14.scope: Consumed 2.819s CPU time.
Dec  9 05:39:14 np0005551604 podman[189435]: 2025-12-09 10:39:14.310188229 +0000 UTC m=+0.077435855 container died 2cfb1116e5c21d944d07a9fbc165d93c5b9ded72611d7fd6ca79caaed003ea14 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.license=GPLv2, tcib_managed=true, container_name=nova_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=edpm, managed_by=edpm_ansible)
Dec  9 05:39:14 np0005551604 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2cfb1116e5c21d944d07a9fbc165d93c5b9ded72611d7fd6ca79caaed003ea14-userdata-shm.mount: Deactivated successfully.
Dec  9 05:39:14 np0005551604 systemd[1]: var-lib-containers-storage-overlay-3b2411f6a5cc06d66ccbb360c079c44fb0b2187d660a7767fed63c82cd30921e-merged.mount: Deactivated successfully.
Dec  9 05:39:14 np0005551604 podman[189435]: 2025-12-09 10:39:14.361422954 +0000 UTC m=+0.128670580 container cleanup 2cfb1116e5c21d944d07a9fbc165d93c5b9ded72611d7fd6ca79caaed003ea14 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, container_name=nova_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Dec  9 05:39:14 np0005551604 podman[189435]: nova_compute
Dec  9 05:39:14 np0005551604 podman[189464]: nova_compute
Dec  9 05:39:14 np0005551604 systemd[1]: edpm_nova_compute.service: Deactivated successfully.
Dec  9 05:39:14 np0005551604 systemd[1]: Stopped nova_compute container.
Dec  9 05:39:14 np0005551604 systemd[1]: Starting nova_compute container...
Dec  9 05:39:14 np0005551604 systemd[1]: Started libcrun container.
Dec  9 05:39:14 np0005551604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b2411f6a5cc06d66ccbb360c079c44fb0b2187d660a7767fed63c82cd30921e/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Dec  9 05:39:14 np0005551604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b2411f6a5cc06d66ccbb360c079c44fb0b2187d660a7767fed63c82cd30921e/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Dec  9 05:39:14 np0005551604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b2411f6a5cc06d66ccbb360c079c44fb0b2187d660a7767fed63c82cd30921e/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Dec  9 05:39:14 np0005551604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b2411f6a5cc06d66ccbb360c079c44fb0b2187d660a7767fed63c82cd30921e/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Dec  9 05:39:14 np0005551604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/3b2411f6a5cc06d66ccbb360c079c44fb0b2187d660a7767fed63c82cd30921e/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Dec  9 05:39:14 np0005551604 podman[189477]: 2025-12-09 10:39:14.583169341 +0000 UTC m=+0.123195317 container init 2cfb1116e5c21d944d07a9fbc165d93c5b9ded72611d7fd6ca79caaed003ea14 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, container_name=nova_compute, tcib_managed=true, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec  9 05:39:14 np0005551604 podman[189477]: 2025-12-09 10:39:14.590577708 +0000 UTC m=+0.130603664 container start 2cfb1116e5c21d944d07a9fbc165d93c5b9ded72611d7fd6ca79caaed003ea14 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, container_name=nova_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=edpm)
Dec  9 05:39:14 np0005551604 podman[189477]: nova_compute
Dec  9 05:39:14 np0005551604 nova_compute[189493]: + sudo -E kolla_set_configs
Dec  9 05:39:14 np0005551604 systemd[1]: Started nova_compute container.
Dec  9 05:39:14 np0005551604 nova_compute[189493]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec  9 05:39:14 np0005551604 nova_compute[189493]: INFO:__main__:Validating config file
Dec  9 05:39:14 np0005551604 nova_compute[189493]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec  9 05:39:14 np0005551604 nova_compute[189493]: INFO:__main__:Copying service configuration files
Dec  9 05:39:14 np0005551604 nova_compute[189493]: INFO:__main__:Deleting /etc/nova/nova.conf
Dec  9 05:39:14 np0005551604 nova_compute[189493]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Dec  9 05:39:14 np0005551604 nova_compute[189493]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Dec  9 05:39:14 np0005551604 nova_compute[189493]: INFO:__main__:Deleting /etc/nova/nova.conf.d/01-nova.conf
Dec  9 05:39:14 np0005551604 nova_compute[189493]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Dec  9 05:39:14 np0005551604 nova_compute[189493]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Dec  9 05:39:14 np0005551604 nova_compute[189493]: INFO:__main__:Deleting /etc/nova/nova.conf.d/25-nova-extra.conf
Dec  9 05:39:14 np0005551604 nova_compute[189493]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Dec  9 05:39:14 np0005551604 nova_compute[189493]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Dec  9 05:39:14 np0005551604 nova_compute[189493]: INFO:__main__:Deleting /etc/nova/nova.conf.d/nova-blank.conf
Dec  9 05:39:14 np0005551604 nova_compute[189493]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Dec  9 05:39:14 np0005551604 nova_compute[189493]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Dec  9 05:39:14 np0005551604 nova_compute[189493]: INFO:__main__:Deleting /etc/nova/nova.conf.d/02-nova-host-specific.conf
Dec  9 05:39:14 np0005551604 nova_compute[189493]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Dec  9 05:39:14 np0005551604 nova_compute[189493]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Dec  9 05:39:14 np0005551604 nova_compute[189493]: INFO:__main__:Deleting /etc/ceph
Dec  9 05:39:14 np0005551604 nova_compute[189493]: INFO:__main__:Creating directory /etc/ceph
Dec  9 05:39:14 np0005551604 nova_compute[189493]: INFO:__main__:Setting permission for /etc/ceph
Dec  9 05:39:14 np0005551604 nova_compute[189493]: INFO:__main__:Deleting /var/lib/nova/.ssh/ssh-privatekey
Dec  9 05:39:14 np0005551604 nova_compute[189493]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Dec  9 05:39:14 np0005551604 nova_compute[189493]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Dec  9 05:39:14 np0005551604 nova_compute[189493]: INFO:__main__:Deleting /var/lib/nova/.ssh/config
Dec  9 05:39:14 np0005551604 nova_compute[189493]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Dec  9 05:39:14 np0005551604 nova_compute[189493]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Dec  9 05:39:14 np0005551604 nova_compute[189493]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Dec  9 05:39:14 np0005551604 nova_compute[189493]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Dec  9 05:39:14 np0005551604 nova_compute[189493]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Dec  9 05:39:14 np0005551604 nova_compute[189493]: INFO:__main__:Writing out command to execute
Dec  9 05:39:14 np0005551604 nova_compute[189493]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Dec  9 05:39:14 np0005551604 nova_compute[189493]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Dec  9 05:39:14 np0005551604 nova_compute[189493]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Dec  9 05:39:14 np0005551604 nova_compute[189493]: ++ cat /run_command
Dec  9 05:39:14 np0005551604 nova_compute[189493]: + CMD=nova-compute
Dec  9 05:39:14 np0005551604 nova_compute[189493]: + ARGS=
Dec  9 05:39:14 np0005551604 nova_compute[189493]: + sudo kolla_copy_cacerts
Dec  9 05:39:14 np0005551604 nova_compute[189493]: + [[ ! -n '' ]]
Dec  9 05:39:14 np0005551604 nova_compute[189493]: + . kolla_extend_start
Dec  9 05:39:14 np0005551604 nova_compute[189493]: Running command: 'nova-compute'
Dec  9 05:39:14 np0005551604 nova_compute[189493]: + echo 'Running command: '\''nova-compute'\'''
Dec  9 05:39:14 np0005551604 nova_compute[189493]: + umask 0022
Dec  9 05:39:14 np0005551604 nova_compute[189493]: + exec nova-compute
Dec  9 05:39:15 np0005551604 python3.9[189656]: ansible-containers.podman.podman_container Invoked with name=nova_compute_init state=started executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Dec  9 05:39:15 np0005551604 systemd[1]: Started libpod-conmon-73fb233dfeced3f1cc50bfd7cf4c5ad03632b74b05228ec222572e6ebdf5cf9e.scope.
Dec  9 05:39:15 np0005551604 systemd[1]: Started libcrun container.
Dec  9 05:39:15 np0005551604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bad08c0d724bb2a32e3cc93fc7084ed5878057cc41ef5149a8ec0a2e82512589/merged/usr/sbin/nova_statedir_ownership.py supports timestamps until 2038 (0x7fffffff)
Dec  9 05:39:15 np0005551604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bad08c0d724bb2a32e3cc93fc7084ed5878057cc41ef5149a8ec0a2e82512589/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Dec  9 05:39:15 np0005551604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/bad08c0d724bb2a32e3cc93fc7084ed5878057cc41ef5149a8ec0a2e82512589/merged/var/lib/_nova_secontext supports timestamps until 2038 (0x7fffffff)
Dec  9 05:39:15 np0005551604 podman[189679]: 2025-12-09 10:39:15.653538033 +0000 UTC m=+0.142258817 container init 73fb233dfeced3f1cc50bfd7cf4c5ad03632b74b05228ec222572e6ebdf5cf9e (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, config_id=edpm, container_name=nova_compute_init, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Dec  9 05:39:15 np0005551604 podman[189679]: 2025-12-09 10:39:15.659843319 +0000 UTC m=+0.148564073 container start 73fb233dfeced3f1cc50bfd7cf4c5ad03632b74b05228ec222572e6ebdf5cf9e (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, config_id=edpm, container_name=nova_compute_init, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']})
Dec  9 05:39:15 np0005551604 python3.9[189656]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman start nova_compute_init
Dec  9 05:39:15 np0005551604 nova_compute_init[189701]: INFO:nova_statedir:Applying nova statedir ownership
Dec  9 05:39:15 np0005551604 nova_compute_init[189701]: INFO:nova_statedir:Target ownership for /var/lib/nova: 42436:42436
Dec  9 05:39:15 np0005551604 nova_compute_init[189701]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/
Dec  9 05:39:15 np0005551604 nova_compute_init[189701]: INFO:nova_statedir:Changing ownership of /var/lib/nova from 1000:1000 to 42436:42436
Dec  9 05:39:15 np0005551604 nova_compute_init[189701]: INFO:nova_statedir:Setting selinux context of /var/lib/nova to system_u:object_r:container_file_t:s0
Dec  9 05:39:15 np0005551604 nova_compute_init[189701]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/instances/
Dec  9 05:39:15 np0005551604 nova_compute_init[189701]: INFO:nova_statedir:Changing ownership of /var/lib/nova/instances from 1000:1000 to 42436:42436
Dec  9 05:39:15 np0005551604 nova_compute_init[189701]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/instances to system_u:object_r:container_file_t:s0
Dec  9 05:39:15 np0005551604 nova_compute_init[189701]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/
Dec  9 05:39:15 np0005551604 nova_compute_init[189701]: INFO:nova_statedir:Ownership of /var/lib/nova/.ssh already 42436:42436
Dec  9 05:39:15 np0005551604 nova_compute_init[189701]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.ssh to system_u:object_r:container_file_t:s0
Dec  9 05:39:15 np0005551604 nova_compute_init[189701]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/ssh-privatekey
Dec  9 05:39:15 np0005551604 nova_compute_init[189701]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/config
Dec  9 05:39:15 np0005551604 nova_compute_init[189701]: INFO:nova_statedir:Nova statedir ownership complete
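[editor's note] The nova_compute_init lines above show one pass of nova_statedir_ownership.py: for each path under /var/lib/nova it checks the current uid:gid, chowns to 42436:42436 on mismatch (here /var/lib/nova and /var/lib/nova/instances were still 1000:1000), relabels for container access, and skips paths named in NOVA_STATEDIR_OWNERSHIP_SKIP. A condensed sketch mirroring only what the log shows; it assumes the libselinux Python bindings, and the ':'-separated skip list is a guess, since the trace only ever sets a single path:

    import os

    import selinux  # libselinux Python bindings, assumed available

    TARGET_UID = TARGET_GID = 42436   # "Target ownership for /var/lib/nova: 42436:42436"
    CONTEXT = "system_u:object_r:container_file_t:s0"
    SKIP = set(filter(None, os.environ.get("NOVA_STATEDIR_OWNERSHIP_SKIP", "").split(":")))

    def fix_tree(root="/var/lib/nova"):
        for dirpath, _dirs, filenames in os.walk(root):
            for path in [dirpath] + [os.path.join(dirpath, f) for f in filenames]:
                if path in SKIP:
                    continue
                st = os.lstat(path)
                if (st.st_uid, st.st_gid) != (TARGET_UID, TARGET_GID):
                    # "Changing ownership of ... from 1000:1000 to 42436:42436"
                    os.lchown(path, TARGET_UID, TARGET_GID)
                if not os.path.islink(path):
                    # "Setting selinux context of ... to system_u:object_r:container_file_t:s0"
                    selinux.setfilecon(path, CONTEXT)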
Dec  9 05:39:15 np0005551604 systemd[1]: libpod-73fb233dfeced3f1cc50bfd7cf4c5ad03632b74b05228ec222572e6ebdf5cf9e.scope: Deactivated successfully.
Dec  9 05:39:15 np0005551604 podman[189715]: 2025-12-09 10:39:15.76592386 +0000 UTC m=+0.026799777 container died 73fb233dfeced3f1cc50bfd7cf4c5ad03632b74b05228ec222572e6ebdf5cf9e (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=nova_compute_init, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=edpm, managed_by=edpm_ansible)
Dec  9 05:39:15 np0005551604 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-73fb233dfeced3f1cc50bfd7cf4c5ad03632b74b05228ec222572e6ebdf5cf9e-userdata-shm.mount: Deactivated successfully.
Dec  9 05:39:15 np0005551604 systemd[1]: var-lib-containers-storage-overlay-bad08c0d724bb2a32e3cc93fc7084ed5878057cc41ef5149a8ec0a2e82512589-merged.mount: Deactivated successfully.
Dec  9 05:39:15 np0005551604 podman[189715]: 2025-12-09 10:39:15.800065529 +0000 UTC m=+0.060941366 container cleanup 73fb233dfeced3f1cc50bfd7cf4c5ad03632b74b05228ec222572e6ebdf5cf9e (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=nova_compute_init, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']})
Dec  9 05:39:15 np0005551604 systemd[1]: libpod-conmon-73fb233dfeced3f1cc50bfd7cf4c5ad03632b74b05228ec222572e6ebdf5cf9e.scope: Deactivated successfully.
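[editor's note] Taken together, the podman events above (init, start, died, cleanup, plus the two systemd scope deactivations) are the full lifecycle of a one-shot container: config_data shows restart 'never', detach False, and net 'none'. A rough CLI equivalent of that config_data, driven from Python; the flags mirror the logged volumes/environment/security_opt, and the pipe through logger in the original command line is elided:

    import subprocess

    subprocess.run(
        [
            "podman", "run", "--rm", "--user", "root", "--net", "none",
            "--security-opt", "label=disable",
            "-e", "NOVA_STATEDIR_OWNERSHIP_SKIP=/var/lib/nova/compute_id",
            "-v", "/dev/log:/dev/log",
            "-v", "/var/lib/nova:/var/lib/nova:shared",
            "-v", "/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z",
            "-v", "/var/lib/openstack/config/nova/nova_statedir_ownership.py"
                  ":/sbin/nova_statedir_ownership.py:z",
            "quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified",
            "python3", "/sbin/nova_statedir_ownership.py",
        ],
        check=True,  # the "died" event above carries the exit status podman saw
    )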
Dec  9 05:39:16 np0005551604 systemd[1]: session-24.scope: Deactivated successfully.
Dec  9 05:39:16 np0005551604 systemd[1]: session-24.scope: Consumed 2min 3.628s CPU time.
Dec  9 05:39:16 np0005551604 systemd-logind[806]: Session 24 logged out. Waiting for processes to exit.
Dec  9 05:39:16 np0005551604 systemd-logind[806]: Removed session 24.
Dec  9 05:39:16 np0005551604 nova_compute[189493]: 2025-12-09 10:39:16.654 189497 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Dec  9 05:39:16 np0005551604 nova_compute[189493]: 2025-12-09 10:39:16.655 189497 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Dec  9 05:39:16 np0005551604 nova_compute[189493]: 2025-12-09 10:39:16.655 189497 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44#033[00m
Dec  9 05:39:16 np0005551604 nova_compute[189493]: 2025-12-09 10:39:16.655 189497 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs#033[00m
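[editor's note] The three DEBUG lines and the INFO summary above come from a single call, os_vif.initialize(), which discovers VIF plugins through stevedore entry points; once loaded, nova's libvirt driver delegates interface plugging to them. Minimal usage, assuming os_vif is installed:

    import os_vif

    # Emits the "Loaded VIF plugin class ..." lines and the
    # "Loaded VIF plugins: linux_bridge, noop, ovs" summary seen above.
    os_vif.initialize()
    # Afterwards the compute driver calls os_vif.plug(vif, instance_info)
    # and os_vif.unplug(vif, instance_info) per interface.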
Dec  9 05:39:16 np0005551604 nova_compute[189493]: 2025-12-09 10:39:16.801 189497 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  9 05:39:16 np0005551604 nova_compute[189493]: 2025-12-09 10:39:16.830 189497 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.029s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  9 05:39:16 np0005551604 nova_compute[189493]: 2025-12-09 10:39:16.831 189497 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473#033[00m
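[editor's note] The grep above is a capability probe, not an error: exit status 1 simply means the iscsiadm binary on this host does not contain the node.session.scan option, and the caller records that and moves on ("Not Retrying"). The same probe expressed with oslo.concurrency, the library emitting these "Running cmd" / "returned" lines; variable names are illustrative:

    from oslo_concurrency import processutils

    try:
        # processutils.execute raises on non-zero exit status by default
        processutils.execute("grep", "-F", "node.session.scan", "/sbin/iscsiadm")
        supports_manual_scan = True
    except processutils.ProcessExecutionError:
        supports_manual_scan = False   # exit code 1, as logged above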
Dec  9 05:39:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:39:16.965 106644 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  9 05:39:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:39:16.966 106644 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  9 05:39:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:39:16.966 106644 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.384 189497 INFO nova.virt.driver [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.515 189497 INFO nova.compute.provider_config [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.528 189497 DEBUG oslo_concurrency.lockutils [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.529 189497 DEBUG oslo_concurrency.lockutils [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.529 189497 DEBUG oslo_concurrency.lockutils [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
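[editor's note] The long run of DEBUG lines that follows ("Full set of CONF:" and every option after it) is a single oslo.config dump: because debug = True, oslo.service calls CONF.log_opt_values() at startup and prints every registered option, group by group, with secret values masked (note transport_url = **** further down). A minimal reproduction of that call against an empty option set; "demo" is a placeholder project name:

    import logging

    from oslo_config import cfg

    logging.basicConfig(level=logging.DEBUG)
    cfg.CONF([], project="demo")   # parse an empty command line, no config files
    # Prints the same banner and option layout as the dump that follows below.
    cfg.CONF.log_opt_values(logging.getLogger(__name__), logging.DEBUG)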
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.529 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.529 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.529 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.530 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.530 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.530 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.530 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.530 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.530 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.530 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.530 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.531 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.531 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.531 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.531 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.531 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.532 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.532 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.532 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.532 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.532 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.532 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.533 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.533 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.533 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.533 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.533 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.534 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.534 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.534 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.534 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.535 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.535 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.535 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.535 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.535 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.535 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.535 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.536 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.536 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.536 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.536 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.536 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.536 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.537 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.537 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.537 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.537 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.537 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.537 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.538 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.538 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.538 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.538 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.538 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.538 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.538 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.539 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.539 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.539 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.539 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.539 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.539 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.540 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.540 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.540 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.540 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.540 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.540 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.540 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.541 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.541 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.541 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.541 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.541 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.541 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.542 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.542 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.542 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.542 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.542 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.542 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.543 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.543 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.543 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.543 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.543 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.543 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.543 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.544 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.544 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.544 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.544 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.544 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.544 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.544 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.545 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.545 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.545 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.545 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.545 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.545 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.545 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.545 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.546 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.546 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.546 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.546 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.546 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.546 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.546 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.546 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.547 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.547 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.547 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.547 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.547 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.547 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.547 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.548 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.548 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.548 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.548 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.548 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.548 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.548 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.548 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.549 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.549 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.549 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.549 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.549 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.549 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.549 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.550 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.550 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.550 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.550 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.550 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.550 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.550 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.550 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.551 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.551 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.551 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.551 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.551 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.551 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.551 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.552 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.552 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.552 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.552 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.552 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.552 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.552 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.553 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.553 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.553 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.553 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.553 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.553 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.553 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.554 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.554 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.554 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.554 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.554 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.554 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.554 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.555 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.555 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.555 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.555 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.555 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.555 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.555 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.556 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.556 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.556 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.556 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.556 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.556 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.556 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.557 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.557 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.557 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.557 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.557 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.557 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.557 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.558 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.558 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.558 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.558 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.558 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.558 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.558 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.559 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.559 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.559 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.559 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.559 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.560 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.560 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.560 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.560 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.560 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.560 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.560 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.561 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.561 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.561 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.561 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.561 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.561 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.561 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.562 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.562 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.562 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.562 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.562 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.562 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.563 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.563 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.563 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.563 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.563 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.563 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.563 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.564 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.564 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.564 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.564 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.564 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.564 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.564 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.565 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.565 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.565 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.565 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.565 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.565 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.566 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.566 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.566 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.566 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.566 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.566 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.566 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.566 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.567 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.567 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.567 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.567 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.567 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.567 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.567 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.568 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.568 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.568 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.568 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.568 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.568 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.569 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.569 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.569 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.569 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.569 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.569 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.569 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.570 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.570 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.570 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.570 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.570 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.570 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.571 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.571 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.571 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.571 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.571 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.571 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.572 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.572 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.572 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.572 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.572 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.572 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.572 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.573 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.573 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.573 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.573 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.573 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.573 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.573 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.574 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.574 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.574 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.574 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.574 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.574 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.574 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.575 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.575 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.575 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.575 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.575 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.575 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.576 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.576 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.576 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.576 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.576 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.576 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.577 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.577 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.577 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.577 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.577 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.577 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.577 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.578 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.578 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.578 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.578 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.578 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.578 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.578 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.579 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.579 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.579 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.579 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.579 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.579 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.579 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.580 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.580 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.580 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.580 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.580 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.580 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.580 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.581 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.581 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.581 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.581 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.581 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.581 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.581 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.582 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.582 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.582 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.582 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.582 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.583 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.583 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.583 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.583 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.583 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.583 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.584 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.584 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.584 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.584 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.584 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.584 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.584 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.585 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.585 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.585 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.585 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.585 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.585 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.585 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.585 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.586 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.586 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.586 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.586 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.586 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.586 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.586 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.587 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.587 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.587 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.587 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.587 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.587 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.587 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.588 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.588 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.588 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.588 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.588 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.588 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.589 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.589 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.589 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.589 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.589 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.589 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.590 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.590 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.590 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.590 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.590 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.590 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.590 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.591 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.591 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.591 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.591 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.591 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.591 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.592 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.592 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.592 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.592 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.592 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.592 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.592 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.593 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.593 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.593 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.593 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.593 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.593 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.593 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.594 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.594 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.594 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.594 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.594 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.594 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.595 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.595 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.595 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.595 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.595 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.595 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.595 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.595 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.596 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.596 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.596 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.596 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.596 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.597 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.597 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.597 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.597 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.597 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.597 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.597 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.598 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.598 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.598 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.598 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.598 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.598 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.598 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.599 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.599 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.599 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.599 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.599 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.599 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] libvirt.images_rbd_ceph_conf   =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.600 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.600 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.600 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] libvirt.images_rbd_glance_store_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.600 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] libvirt.images_rbd_pool        = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.600 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] libvirt.images_type            = qcow2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.600 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.600 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.601 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.601 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.601 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.601 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.601 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.601 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.601 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.602 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.602 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.602 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.602 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.602 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.602 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.602 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.603 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.603 189497 WARNING oslo_config.cfg [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Dec  9 05:39:17 np0005551604 nova_compute[189493]: live_migration_uri is deprecated for removal in favor of two other options that
Dec  9 05:39:17 np0005551604 nova_compute[189493]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Dec  9 05:39:17 np0005551604 nova_compute[189493]: and ``live_migration_inbound_addr`` respectively.
Dec  9 05:39:17 np0005551604 nova_compute[189493]: ).  Its value may be silently ignored in the future.#033[00m
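[annotation] The warning above says `live_migration_uri` is superseded by `live_migration_scheme` plus `live_migration_inbound_addr`. A minimal sketch of the equivalent settings, using only the documented oslo.config API (`ConfigOpts.set_override`); the hostname is illustrative, and the assumption is that nova's libvirt driver composes the target URI as qemu+<scheme>://<addr>/system, which the deprecated value logged just below (`qemu+tls://%s/system`) suggests:

    # Hedged sketch: replacing the deprecated libvirt.live_migration_uri with
    # the two options named in the warning. The hostname is illustrative.
    from oslo_config import cfg

    CONF = cfg.ConfigOpts()
    CONF.register_opts(
        [cfg.StrOpt('live_migration_scheme'),
         cfg.StrOpt('live_migration_inbound_addr')],
        group='libvirt')
    CONF([])  # parse an empty command line so the options are resolved

    # Intended equivalent of live_migration_uri = qemu+tls://%s/system:
    CONF.set_override('live_migration_scheme', 'tls', group='libvirt')
    CONF.set_override('live_migration_inbound_addr',
                      'compute1.example.com', group='libvirt')

In a deployment the same change would normally be made in the [libvirt] section of nova.conf rather than programmatically; the override form above just keeps the sketch self-contained.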
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.603 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.603 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.603 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.603 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.604 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.604 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.604 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.604 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.604 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.604 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.604 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.605 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.605 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.605 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.605 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.605 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.605 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.606 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.606 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] libvirt.rbd_secret_uuid        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.606 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] libvirt.rbd_user               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.606 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.606 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.606 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.606 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.607 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.607 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.607 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.607 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.607 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.607 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.607 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.608 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.608 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.608 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.608 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.608 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.608 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.609 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.609 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.609 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.609 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.609 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.609 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.610 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.610 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.610 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.610 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.610 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.610 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.611 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.611 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.611 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.611 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.611 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.611 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.611 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.612 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.612 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.612 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.612 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.612 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.612 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.612 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.613 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.613 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.613 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.613 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.613 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.613 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.613 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.614 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.614 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.614 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.614 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.614 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.615 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.615 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.615 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.615 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.615 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.615 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.616 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.616 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.616 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.616 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.616 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.616 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.616 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.617 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.617 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.617 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.617 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.617 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.617 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.617 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.617 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.618 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.618 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.618 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.618 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.618 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.618 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.618 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.619 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.619 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.619 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.619 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.619 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.619 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.619 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.620 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.620 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.620 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.620 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.620 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.620 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.620 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.620 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.621 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.621 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.621 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.621 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.621 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.621 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.622 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.622 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.622 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.622 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.622 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.622 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.622 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.623 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.623 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.623 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.623 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.623 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.623 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.623 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.624 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.624 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.624 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.624 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.624 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.624 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.625 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.625 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.625 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.625 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.625 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.625 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.625 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.626 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.626 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.626 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.626 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.626 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.626 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.626 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.627 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.627 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.627 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.627 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.627 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.627 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.627 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.628 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.628 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.628 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.628 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.628 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.628 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.629 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.629 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.629 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.629 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.629 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.629 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.629 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.630 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.630 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.630 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.630 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.630 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.630 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.630 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.631 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.631 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.631 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.631 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.631 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.631 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.631 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.632 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.632 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.632 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.632 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.632 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.632 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.633 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.633 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.633 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.633 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.633 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.633 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.633 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.634 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.634 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.634 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.634 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.634 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.634 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.634 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.635 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.635 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.635 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.635 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.635 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.635 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.635 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.636 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.636 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.636 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.636 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.636 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.637 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.637 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.637 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.637 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.637 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.637 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.637 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.638 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.638 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.638 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.638 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.638 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.638 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.638 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.639 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.639 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.639 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.639 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.639 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.639 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.639 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.640 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.640 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.640 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.640 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.640 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.640 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.641 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.641 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.641 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.641 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.641 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.641 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.642 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.642 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.642 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.642 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.642 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.642 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.642 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.642 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.643 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.643 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.643 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.643 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.643 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.643 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.643 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.644 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.644 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.644 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.644 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.644 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.644 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.645 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.645 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.645 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.645 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.645 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.645 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.645 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.646 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.646 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.646 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.646 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.646 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.646 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.647 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.647 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.647 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.647 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.647 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.647 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.648 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.648 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.648 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.648 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.648 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.649 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.649 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.649 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.649 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.649 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.649 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.649 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.650 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.650 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.650 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.650 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.650 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.650 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.650 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.651 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.651 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.651 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.651 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.651 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.651 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.651 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.652 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.652 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.652 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.652 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.652 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.652 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.653 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.653 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.653 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.653 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.653 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.653 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.653 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.654 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.654 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.654 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.654 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.654 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.654 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.654 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.655 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.655 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.655 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.655 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.655 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.655 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.655 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.656 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.656 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.656 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.656 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.656 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.656 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.657 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.657 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.657 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.657 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.657 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.657 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.658 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.658 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.658 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.658 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.658 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.658 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.658 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.659 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.659 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.659 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.659 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.659 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.659 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.660 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.660 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.660 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.660 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.660 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.660 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.661 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.661 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.661 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.661 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.661 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.662 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.662 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.662 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.662 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.662 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.662 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.663 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.663 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.663 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.663 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.663 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.664 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.664 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.664 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.664 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.664 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.665 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.665 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.665 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.665 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.665 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.665 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.665 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.666 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.666 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.666 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.666 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.666 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.666 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.667 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.667 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.667 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.667 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.667 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.667 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.667 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.667 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.668 189497 DEBUG oslo_service.service [None req-346a195e-8421-4b33-82a5-399d0edfccb2 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.668 189497 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.684 189497 INFO nova.virt.node [None req-bd919016-4d35-4252-9704-133b2c72d336 - - - - - -] Determined node identity cdc1168d-33c9-4d2c-8f23-1b695a68afd0 from /var/lib/nova/compute_id#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.684 189497 DEBUG nova.virt.libvirt.host [None req-bd919016-4d35-4252-9704-133b2c72d336 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.685 189497 DEBUG nova.virt.libvirt.host [None req-bd919016-4d35-4252-9704-133b2c72d336 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.685 189497 DEBUG nova.virt.libvirt.host [None req-bd919016-4d35-4252-9704-133b2c72d336 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.686 189497 DEBUG nova.virt.libvirt.host [None req-bd919016-4d35-4252-9704-133b2c72d336 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.700 189497 DEBUG nova.virt.libvirt.host [None req-bd919016-4d35-4252-9704-133b2c72d336 - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7f623b10de50> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.704 189497 DEBUG nova.virt.libvirt.host [None req-bd919016-4d35-4252-9704-133b2c72d336 - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7f623b10de50> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.704 189497 INFO nova.virt.libvirt.driver [None req-bd919016-4d35-4252-9704-133b2c72d336 - - - - - -] Connection event '1' reason 'None'#033[00m
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.712 189497 INFO nova.virt.libvirt.host [None req-bd919016-4d35-4252-9704-133b2c72d336 - - - - - -] Libvirt host capabilities <capabilities>
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 
Dec  9 05:39:17 np0005551604 nova_compute[189493]:  <host>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:    <uuid>6aaf5123-0bdb-461d-92bb-b40c4bea282b</uuid>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:    <cpu>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <arch>x86_64</arch>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model>EPYC-Rome-v4</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <vendor>AMD</vendor>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <microcode version='16777317'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <signature family='23' model='49' stepping='0'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <maxphysaddr mode='emulate' bits='40'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <feature name='x2apic'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <feature name='tsc-deadline'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <feature name='osxsave'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <feature name='hypervisor'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <feature name='tsc_adjust'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <feature name='spec-ctrl'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <feature name='stibp'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <feature name='arch-capabilities'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <feature name='ssbd'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <feature name='cmp_legacy'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <feature name='topoext'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <feature name='virt-ssbd'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <feature name='lbrv'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <feature name='tsc-scale'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <feature name='vmcb-clean'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <feature name='pause-filter'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <feature name='pfthreshold'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <feature name='svme-addr-chk'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <feature name='rdctl-no'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <feature name='skip-l1dfl-vmentry'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <feature name='mds-no'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <feature name='pschange-mc-no'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <pages unit='KiB' size='4'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <pages unit='KiB' size='2048'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <pages unit='KiB' size='1048576'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:    </cpu>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:    <power_management>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <suspend_mem/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <suspend_disk/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <suspend_hybrid/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:    </power_management>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:    <iommu support='no'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:    <migration_features>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <live/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <uri_transports>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <uri_transport>tcp</uri_transport>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <uri_transport>rdma</uri_transport>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </uri_transports>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:    </migration_features>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:    <topology>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <cells num='1'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <cell id='0'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:          <memory unit='KiB'>7864304</memory>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:          <pages unit='KiB' size='4'>1966076</pages>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:          <pages unit='KiB' size='2048'>0</pages>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:          <pages unit='KiB' size='1048576'>0</pages>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:          <distances>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:            <sibling id='0' value='10'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:          </distances>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:          <cpus num='8'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:            <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:            <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:            <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:            <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:            <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:            <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:            <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:            <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:          </cpus>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        </cell>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </cells>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:    </topology>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:    <cache>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:    </cache>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:    <secmodel>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model>selinux</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <doi>0</doi>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:    </secmodel>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:    <secmodel>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model>dac</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <doi>0</doi>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <baselabel type='kvm'>+107:+107</baselabel>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <baselabel type='qemu'>+107:+107</baselabel>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:    </secmodel>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:  </host>
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 
Dec  9 05:39:17 np0005551604 nova_compute[189493]:  <guest>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:    <os_type>hvm</os_type>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:    <arch name='i686'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <wordsize>32</wordsize>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <domain type='qemu'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <domain type='kvm'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:    </arch>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:    <features>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <pae/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <nonpae/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <acpi default='on' toggle='yes'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <apic default='on' toggle='no'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <cpuselection/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <deviceboot/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <disksnapshot default='on' toggle='no'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <externalSnapshot/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:    </features>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:  </guest>
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 
Dec  9 05:39:17 np0005551604 nova_compute[189493]:  <guest>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:    <os_type>hvm</os_type>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:    <arch name='x86_64'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <wordsize>64</wordsize>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <emulator>/usr/libexec/qemu-kvm</emulator>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <domain type='qemu'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <domain type='kvm'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:    </arch>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:    <features>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <acpi default='on' toggle='yes'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <apic default='on' toggle='no'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <cpuselection/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <deviceboot/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <disksnapshot default='on' toggle='no'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <externalSnapshot/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:    </features>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:  </guest>
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 
Dec  9 05:39:17 np0005551604 nova_compute[189493]: </capabilities>
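The capabilities document dumped above is the XML string returned by libvirt's getCapabilities(). A minimal sketch of pulling the host CPU and NUMA facts out of it with the standard library (the values in comments are the ones visible in the dump; the connection handling is the same hypothetical setup as above):

    import xml.etree.ElementTree as ET
    import libvirt

    conn = libvirt.openReadOnly("qemu:///system")
    caps = ET.fromstring(conn.getCapabilities())      # same XML as logged above

    cpu = caps.find("host/cpu")
    print(cpu.findtext("arch"), cpu.findtext("model"))  # x86_64 EPYC-Rome-v4
    features = [f.get("name") for f in cpu.findall("feature")]

    cell = caps.find("host/topology/cells/cell")
    print("cell", cell.get("id"), cell.findtext("memory"), "KiB")  # cell 0 7864304 KiB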
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.722 189497 DEBUG nova.virt.libvirt.host [None req-bd919016-4d35-4252-9704-133b2c72d336 - - - - - -] Getting domain capabilities for i686 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
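The per-machine-type capabilities fetched here come from libvirt's getDomainCapabilities(). A minimal sketch, with the arguments mirroring the <path>, <arch>, <machine>, and <domain> values in the i686/pc dump that follows (reusing the hypothetical connection from the earlier sketch):

    # Args correspond to emulator binary, guest arch, machine type, virt type.
    dom_caps_xml = conn.getDomainCapabilities(
        "/usr/libexec/qemu-kvm",  # <path> in the XML below
        "i686",                   # <arch>
        "pc",                     # machine alias (canonical pc-i440fx-rhel7.6.0)
        "kvm")                    # <domain>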
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.724 189497 DEBUG nova.virt.libvirt.volume.mount [None req-bd919016-4d35-4252-9704-133b2c72d336 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.728 189497 DEBUG nova.virt.libvirt.host [None req-bd919016-4d35-4252-9704-133b2c72d336 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Dec  9 05:39:17 np0005551604 nova_compute[189493]: <domainCapabilities>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:  <path>/usr/libexec/qemu-kvm</path>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:  <domain>kvm</domain>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:  <machine>pc-i440fx-rhel7.6.0</machine>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:  <arch>i686</arch>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:  <vcpu max='240'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:  <iothreads supported='yes'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:  <os supported='yes'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:    <enum name='firmware'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:    <loader supported='yes'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <enum name='type'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <value>rom</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <value>pflash</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </enum>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <enum name='readonly'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <value>yes</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <value>no</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </enum>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <enum name='secure'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <value>no</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </enum>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:    </loader>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:  </os>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:  <cpu>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:    <mode name='host-passthrough' supported='yes'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <enum name='hostPassthroughMigratable'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <value>on</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <value>off</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </enum>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:    </mode>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:    <mode name='maximum' supported='yes'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <enum name='maximumMigratable'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <value>on</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <value>off</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </enum>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:    </mode>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:    <mode name='host-model' supported='yes'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model fallback='forbid'>EPYC-Rome</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <vendor>AMD</vendor>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <maxphysaddr mode='passthrough' limit='40'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <feature policy='require' name='x2apic'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <feature policy='require' name='tsc-deadline'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <feature policy='require' name='hypervisor'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <feature policy='require' name='tsc_adjust'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <feature policy='require' name='spec-ctrl'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <feature policy='require' name='stibp'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <feature policy='require' name='ssbd'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <feature policy='require' name='cmp_legacy'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <feature policy='require' name='overflow-recov'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <feature policy='require' name='succor'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <feature policy='require' name='ibrs'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <feature policy='require' name='amd-ssbd'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <feature policy='require' name='virt-ssbd'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <feature policy='require' name='lbrv'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <feature policy='require' name='tsc-scale'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <feature policy='require' name='vmcb-clean'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <feature policy='require' name='flushbyasid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <feature policy='require' name='pause-filter'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <feature policy='require' name='pfthreshold'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <feature policy='require' name='svme-addr-chk'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <feature policy='require' name='lfence-always-serializing'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <feature policy='disable' name='xsaves'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:    </mode>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:    <mode name='custom' supported='yes'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='Broadwell'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='hle'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='rtm'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='Broadwell-IBRS'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='hle'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='rtm'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='Broadwell-noTSX'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='Broadwell-noTSX-IBRS'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='Broadwell-v1'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='hle'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='rtm'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='Broadwell-v2'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='Broadwell-v3'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='hle'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='rtm'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='Broadwell-v4'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='Cascadelake-Server'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512bw'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512cd'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512dq'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512f'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vl'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vnni'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='hle'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pku'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='rtm'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='Cascadelake-Server-noTSX'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512bw'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512cd'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512dq'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512f'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vl'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vnni'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='ibrs-all'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pku'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='Cascadelake-Server-v1'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512bw'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512cd'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512dq'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512f'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vl'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vnni'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='hle'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pku'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='rtm'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='Cascadelake-Server-v2'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512bw'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512cd'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512dq'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512f'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vl'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vnni'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='hle'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='ibrs-all'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pku'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='rtm'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='Cascadelake-Server-v3'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512bw'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512cd'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512dq'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512f'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vl'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vnni'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='ibrs-all'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pku'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='Cascadelake-Server-v4'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512bw'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512cd'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512dq'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512f'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vl'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vnni'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='ibrs-all'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pku'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='Cascadelake-Server-v5'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512bw'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512cd'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512dq'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512f'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vl'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vnni'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='ibrs-all'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pku'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='xsaves'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='Cooperlake'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512-bf16'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512bw'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512cd'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512dq'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512f'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vl'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vnni'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='hle'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='ibrs-all'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pku'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='rtm'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='taa-no'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='Cooperlake-v1'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512-bf16'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512bw'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512cd'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512dq'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512f'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vl'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vnni'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='hle'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='ibrs-all'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pku'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='rtm'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='taa-no'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='Cooperlake-v2'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512-bf16'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512bw'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512cd'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512dq'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512f'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vl'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vnni'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='hle'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='ibrs-all'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pku'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='rtm'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='taa-no'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='xsaves'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='Denverton'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='mpx'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='Denverton-v1'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='mpx'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='Denverton-v2'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='Denverton-v3'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='xsaves'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='Dhyana-v2'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='xsaves'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='EPYC-Genoa'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='amd-psfd'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='auto-ibrs'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512-bf16'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512-vpopcntdq'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512bitalg'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512bw'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512cd'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512dq'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512f'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512ifma'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vbmi'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vbmi2'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vl'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vnni'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='fsrm'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='gfni'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='la57'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='no-nested-data-bp'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='null-sel-clr-base'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pku'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='stibp-always-on'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='vaes'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='vpclmulqdq'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='xsaves'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='EPYC-Genoa-v1'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='amd-psfd'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='auto-ibrs'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512-bf16'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512-vpopcntdq'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512bitalg'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512bw'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512cd'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512dq'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512f'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512ifma'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vbmi'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vbmi2'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vl'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vnni'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='fsrm'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='gfni'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='la57'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='no-nested-data-bp'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='null-sel-clr-base'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pku'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='stibp-always-on'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='vaes'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='vpclmulqdq'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='xsaves'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='EPYC-Milan'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='fsrm'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pku'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='xsaves'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='EPYC-Milan-v1'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='fsrm'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pku'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='xsaves'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='EPYC-Milan-v2'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='amd-psfd'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='fsrm'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='no-nested-data-bp'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='null-sel-clr-base'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pku'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='stibp-always-on'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='vaes'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='vpclmulqdq'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='xsaves'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='EPYC-Rome'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='xsaves'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='EPYC-Rome-v1'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='xsaves'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='EPYC-Rome-v2'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='xsaves'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='EPYC-Rome-v3'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='xsaves'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='EPYC-v3'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='xsaves'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='EPYC-v4'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='xsaves'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='GraniteRapids'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='amx-bf16'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='amx-fp16'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='amx-int8'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='amx-tile'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx-vnni'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512-bf16'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512-fp16'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512-vpopcntdq'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512bitalg'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512bw'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512cd'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512dq'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512f'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512ifma'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vbmi'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vbmi2'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vl'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vnni'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='bus-lock-detect'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='fbsdp-no'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='fsrc'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='fsrm'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='fsrs'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='fzrm'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='gfni'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='hle'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='ibrs-all'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='la57'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='mcdt-no'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pbrsb-no'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pku'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='prefetchiti'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='psdp-no'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='rtm'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='sbdr-ssdp-no'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='serialize'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='taa-no'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='tsx-ldtrk'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='vaes'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='vpclmulqdq'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='xfd'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='xsaves'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='GraniteRapids-v1'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='amx-bf16'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='amx-fp16'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='amx-int8'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='amx-tile'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx-vnni'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512-bf16'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512-fp16'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512-vpopcntdq'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512bitalg'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512bw'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512cd'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512dq'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512f'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512ifma'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vbmi'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vbmi2'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vl'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vnni'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='bus-lock-detect'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='fbsdp-no'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='fsrc'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='fsrm'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='fsrs'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='fzrm'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='gfni'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='hle'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='ibrs-all'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='la57'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='mcdt-no'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pbrsb-no'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pku'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='prefetchiti'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='psdp-no'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='rtm'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='sbdr-ssdp-no'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='serialize'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='taa-no'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='tsx-ldtrk'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='vaes'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='vpclmulqdq'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='xfd'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='xsaves'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='GraniteRapids-v2'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='amx-bf16'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='amx-fp16'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='amx-int8'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='amx-tile'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx-vnni'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx10'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx10-128'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx10-256'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx10-512'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512-bf16'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512-fp16'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512-vpopcntdq'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512bitalg'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512bw'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512cd'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512dq'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512f'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512ifma'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vbmi'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vbmi2'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vl'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vnni'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='bus-lock-detect'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='cldemote'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='fbsdp-no'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='fsrc'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='fsrm'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='fsrs'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='fzrm'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='gfni'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='hle'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='ibrs-all'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='la57'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='mcdt-no'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='movdir64b'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='movdiri'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pbrsb-no'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pku'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='prefetchiti'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='psdp-no'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='rtm'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='sbdr-ssdp-no'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='serialize'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='ss'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='taa-no'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='tsx-ldtrk'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='vaes'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='vpclmulqdq'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='xfd'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='xsaves'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='Haswell'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='hle'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='rtm'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='Haswell-IBRS'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='hle'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='rtm'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='Haswell-noTSX'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='Haswell-noTSX-IBRS'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='Haswell-v1'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='hle'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='rtm'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='Haswell-v2'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='Haswell-v3'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='hle'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='rtm'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='Haswell-v4'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='Icelake-Server'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512-vpopcntdq'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512bitalg'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512bw'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512cd'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512dq'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512f'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vbmi'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vbmi2'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vl'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vnni'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='gfni'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='hle'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='la57'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pku'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='rtm'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='vaes'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='vpclmulqdq'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='Icelake-Server-noTSX'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512-vpopcntdq'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512bitalg'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512bw'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512cd'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512dq'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512f'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vbmi'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vbmi2'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vl'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vnni'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='gfni'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='la57'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pku'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='vaes'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='vpclmulqdq'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='Icelake-Server-v1'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512-vpopcntdq'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512bitalg'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512bw'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512cd'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512dq'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512f'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vbmi'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vbmi2'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vl'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vnni'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='gfni'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='hle'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='la57'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pku'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='rtm'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='vaes'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='vpclmulqdq'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='Icelake-Server-v2'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512-vpopcntdq'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512bitalg'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512bw'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512cd'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512dq'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512f'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vbmi'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vbmi2'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vl'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vnni'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='gfni'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='la57'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pku'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='vaes'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='vpclmulqdq'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='Icelake-Server-v3'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512-vpopcntdq'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512bitalg'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512bw'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512cd'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512dq'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512f'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vbmi'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vbmi2'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vl'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vnni'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='gfni'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='ibrs-all'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='la57'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pku'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='taa-no'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='vaes'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='vpclmulqdq'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='Icelake-Server-v4'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512-vpopcntdq'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512bitalg'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512bw'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512cd'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512dq'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512f'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512ifma'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vbmi'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vbmi2'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vl'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vnni'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='fsrm'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='gfni'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='ibrs-all'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='la57'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pku'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='taa-no'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='vaes'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='vpclmulqdq'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='Icelake-Server-v5'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512-vpopcntdq'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512bitalg'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512bw'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512cd'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512dq'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512f'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512ifma'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vbmi'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vbmi2'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vl'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vnni'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='fsrm'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='gfni'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='ibrs-all'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='la57'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pku'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='taa-no'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='vaes'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='vpclmulqdq'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='xsaves'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='Icelake-Server-v6'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512-vpopcntdq'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512bitalg'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512bw'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512cd'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512dq'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512f'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512ifma'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vbmi'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vbmi2'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vl'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vnni'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='fsrm'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='gfni'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='ibrs-all'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='la57'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pku'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='taa-no'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='vaes'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='vpclmulqdq'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='xsaves'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='Icelake-Server-v7'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512-vpopcntdq'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512bitalg'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512bw'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512cd'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512dq'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512f'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512ifma'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vbmi'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vbmi2'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vl'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vnni'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='fsrm'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='gfni'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='hle'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='ibrs-all'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='la57'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pku'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='rtm'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='taa-no'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='vaes'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='vpclmulqdq'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='xsaves'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='IvyBridge'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='IvyBridge-IBRS'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='IvyBridge-v1'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='IvyBridge-v2'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='KnightsMill'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512-4fmaps'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512-4vnniw'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512-vpopcntdq'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512cd'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512er'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512f'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512pf'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='ss'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='KnightsMill-v1'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512-4fmaps'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512-4vnniw'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512-vpopcntdq'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512cd'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512er'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512f'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512pf'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='ss'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='Opteron_G4'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='fma4'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='xop'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='Opteron_G4-v1'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='fma4'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='xop'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='Opteron_G5'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='fma4'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='tbm'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='xop'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='Opteron_G5-v1'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='fma4'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='tbm'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='xop'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='SapphireRapids'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='amx-bf16'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='amx-int8'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='amx-tile'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx-vnni'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512-bf16'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512-fp16'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512-vpopcntdq'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512bitalg'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512bw'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512cd'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512dq'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512f'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512ifma'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vbmi'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vbmi2'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vl'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vnni'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='bus-lock-detect'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='fsrc'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='fsrm'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='fsrs'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='fzrm'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='gfni'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='hle'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='ibrs-all'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='la57'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pku'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='rtm'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='serialize'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='taa-no'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='tsx-ldtrk'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='vaes'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='vpclmulqdq'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='xfd'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='xsaves'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='SapphireRapids-v1'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='amx-bf16'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='amx-int8'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='amx-tile'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx-vnni'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512-bf16'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512-fp16'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512-vpopcntdq'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512bitalg'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512bw'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512cd'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512dq'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512f'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512ifma'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vbmi'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vbmi2'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vl'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vnni'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='bus-lock-detect'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='fsrc'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='fsrm'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='fsrs'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='fzrm'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='gfni'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='hle'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='ibrs-all'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='la57'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pku'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='rtm'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='serialize'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='taa-no'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='tsx-ldtrk'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='vaes'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='vpclmulqdq'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='xfd'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='xsaves'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='SapphireRapids-v2'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='amx-bf16'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='amx-int8'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='amx-tile'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx-vnni'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512-bf16'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512-fp16'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512-vpopcntdq'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512bitalg'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512bw'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512cd'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512dq'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512f'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512ifma'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vbmi'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vbmi2'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vl'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vnni'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='bus-lock-detect'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='fbsdp-no'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='fsrc'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='fsrm'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='fsrs'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='fzrm'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='gfni'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='hle'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='ibrs-all'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='la57'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pku'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='psdp-no'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='rtm'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='sbdr-ssdp-no'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='serialize'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='taa-no'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='tsx-ldtrk'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='vaes'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='vpclmulqdq'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='xfd'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='xsaves'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='SapphireRapids-v3'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='amx-bf16'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='amx-int8'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='amx-tile'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx-vnni'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512-bf16'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512-fp16'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512-vpopcntdq'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512bitalg'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512bw'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512cd'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512dq'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512f'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512ifma'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vbmi'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vbmi2'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vl'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vnni'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='bus-lock-detect'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='cldemote'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='fbsdp-no'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='fsrc'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='fsrm'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='fsrs'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='fzrm'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='gfni'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='hle'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='ibrs-all'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='la57'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='movdir64b'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='movdiri'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pku'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='psdp-no'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='rtm'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='sbdr-ssdp-no'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='serialize'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='ss'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='taa-no'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='tsx-ldtrk'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='vaes'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='vpclmulqdq'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='xfd'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='xsaves'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='SierraForest'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx-ifma'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx-ne-convert'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx-vnni'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx-vnni-int8'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='bus-lock-detect'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='cmpccxadd'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='fbsdp-no'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='fsrm'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='fsrs'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='gfni'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='ibrs-all'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='mcdt-no'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pbrsb-no'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pku'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='psdp-no'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='sbdr-ssdp-no'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='serialize'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='vaes'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='vpclmulqdq'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='xsaves'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='SierraForest-v1'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx-ifma'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx-ne-convert'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx-vnni'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx-vnni-int8'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='bus-lock-detect'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='cmpccxadd'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='fbsdp-no'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='fsrm'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='fsrs'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='gfni'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='ibrs-all'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='mcdt-no'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pbrsb-no'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pku'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='psdp-no'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='sbdr-ssdp-no'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='serialize'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='vaes'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='vpclmulqdq'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='xsaves'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='Skylake-Client'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='hle'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='rtm'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='Skylake-Client-IBRS'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='hle'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='rtm'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='Skylake-Client-v1'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='hle'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='rtm'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='Skylake-Client-v2'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='hle'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='rtm'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='Skylake-Client-v3'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='Skylake-Client-v4'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='xsaves'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='Skylake-Server'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512bw'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512cd'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512dq'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512f'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vl'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='hle'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pku'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='rtm'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='Skylake-Server-IBRS'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512bw'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512cd'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512dq'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512f'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vl'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='hle'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pku'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='rtm'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512bw'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512cd'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512dq'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512f'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vl'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pku'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='Skylake-Server-v1'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512bw'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512cd'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512dq'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512f'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vl'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='hle'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pku'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='rtm'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='Skylake-Server-v2'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512bw'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512cd'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512dq'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512f'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vl'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='hle'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pku'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='rtm'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='Skylake-Server-v3'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512bw'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512cd'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512dq'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512f'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vl'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pku'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='Skylake-Server-v4'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512bw'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512cd'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512dq'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512f'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vl'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pku'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='Skylake-Server-v5'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512bw'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512cd'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512dq'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512f'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vl'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pku'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='xsaves'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='Snowridge'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='cldemote'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='core-capability'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='gfni'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='movdir64b'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='movdiri'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='mpx'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='split-lock-detect'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='Snowridge-v1'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='cldemote'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='core-capability'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='gfni'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='movdir64b'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='movdiri'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='mpx'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='split-lock-detect'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='Snowridge-v2'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='cldemote'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='core-capability'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='gfni'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='movdir64b'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='movdiri'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='split-lock-detect'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='Snowridge-v3'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='cldemote'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='core-capability'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='gfni'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='movdir64b'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='movdiri'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='split-lock-detect'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='xsaves'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='Snowridge-v4'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='cldemote'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='gfni'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='movdir64b'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='movdiri'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='xsaves'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='athlon'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='3dnow'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='3dnowext'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='athlon-v1'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='3dnow'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='3dnowext'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='core2duo'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='ss'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='core2duo-v1'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='ss'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='coreduo'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='ss'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='coreduo-v1'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='ss'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='n270'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='ss'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='n270-v1'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='ss'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='phenom'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='3dnow'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='3dnowext'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='phenom-v1'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='3dnow'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='3dnowext'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:    </mode>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:  </cpu>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:  <memoryBacking supported='yes'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:    <enum name='sourceType'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <value>file</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <value>anonymous</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <value>memfd</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:    </enum>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:  </memoryBacking>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:  <devices>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:    <disk supported='yes'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <enum name='diskDevice'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <value>disk</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <value>cdrom</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <value>floppy</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <value>lun</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </enum>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <enum name='bus'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <value>ide</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <value>fdc</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <value>scsi</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <value>virtio</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <value>usb</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <value>sata</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </enum>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <enum name='model'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <value>virtio</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <value>virtio-transitional</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <value>virtio-non-transitional</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </enum>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:    </disk>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:    <graphics supported='yes'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <enum name='type'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <value>vnc</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <value>egl-headless</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <value>dbus</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </enum>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:    </graphics>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:    <video supported='yes'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <enum name='modelType'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <value>vga</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <value>cirrus</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <value>virtio</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <value>none</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <value>bochs</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <value>ramfb</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </enum>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:    </video>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:    <hostdev supported='yes'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <enum name='mode'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <value>subsystem</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </enum>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <enum name='startupPolicy'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <value>default</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <value>mandatory</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <value>requisite</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <value>optional</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </enum>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <enum name='subsysType'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <value>usb</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <value>pci</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <value>scsi</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </enum>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <enum name='capsType'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <enum name='pciBackend'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:    </hostdev>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:    <rng supported='yes'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <enum name='model'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <value>virtio</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <value>virtio-transitional</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <value>virtio-non-transitional</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </enum>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <enum name='backendModel'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <value>random</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <value>egd</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <value>builtin</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </enum>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:    </rng>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:    <filesystem supported='yes'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <enum name='driverType'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <value>path</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <value>handle</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <value>virtiofs</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </enum>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:    </filesystem>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:    <tpm supported='yes'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <enum name='model'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <value>tpm-tis</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <value>tpm-crb</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </enum>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <enum name='backendModel'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <value>emulator</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <value>external</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </enum>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <enum name='backendVersion'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <value>2.0</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </enum>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:    </tpm>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:    <redirdev supported='yes'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <enum name='bus'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <value>usb</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </enum>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:    </redirdev>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:    <channel supported='yes'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <enum name='type'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <value>pty</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <value>unix</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </enum>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:    </channel>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:    <crypto supported='yes'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <enum name='model'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <enum name='type'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <value>qemu</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </enum>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <enum name='backendModel'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <value>builtin</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </enum>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:    </crypto>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:    <interface supported='yes'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <enum name='backendType'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <value>default</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <value>passt</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </enum>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:    </interface>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:    <panic supported='yes'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <enum name='model'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <value>isa</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <value>hyperv</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </enum>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:    </panic>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:    <console supported='yes'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <enum name='type'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <value>null</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <value>vc</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <value>pty</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <value>dev</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <value>file</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <value>pipe</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <value>stdio</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <value>udp</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <value>tcp</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <value>unix</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <value>qemu-vdagent</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <value>dbus</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </enum>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:    </console>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:  </devices>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:  <features>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:    <gic supported='no'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:    <vmcoreinfo supported='yes'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:    <genid supported='yes'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:    <backingStoreInput supported='yes'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:    <backup supported='yes'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:    <async-teardown supported='yes'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:    <ps2 supported='yes'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:    <sev supported='no'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:    <sgx supported='no'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:    <hyperv supported='yes'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <enum name='features'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <value>relaxed</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <value>vapic</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <value>spinlocks</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <value>vpindex</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <value>runtime</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <value>synic</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <value>stimer</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <value>reset</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <value>vendor_id</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <value>frequencies</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <value>reenlightenment</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <value>tlbflush</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <value>ipi</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <value>avic</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <value>emsr_bitmap</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <value>xmm_input</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </enum>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <defaults>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <spinlocks>4095</spinlocks>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <stimer_direct>on</stimer_direct>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <tlbflush_direct>on</tlbflush_direct>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <tlbflush_extended>on</tlbflush_extended>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <vendor_id>Linux KVM Hv</vendor_id>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </defaults>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:    </hyperv>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:    <launchSecurity supported='yes'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <enum name='sectype'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <value>tdx</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </enum>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:    </launchSecurity>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:  </features>
Dec  9 05:39:17 np0005551604 nova_compute[189493]: </domainCapabilities>
Dec  9 05:39:17 np0005551604 nova_compute[189493]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.735 189497 DEBUG nova.virt.libvirt.host [None req-bd919016-4d35-4252-9704-133b2c72d336 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Dec  9 05:39:17 np0005551604 nova_compute[189493]: <domainCapabilities>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:  <path>/usr/libexec/qemu-kvm</path>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:  <domain>kvm</domain>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:  <machine>pc-q35-rhel9.8.0</machine>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:  <arch>i686</arch>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:  <vcpu max='4096'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:  <iothreads supported='yes'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:  <os supported='yes'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:    <enum name='firmware'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:    <loader supported='yes'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <enum name='type'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <value>rom</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <value>pflash</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </enum>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <enum name='readonly'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <value>yes</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <value>no</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </enum>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <enum name='secure'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <value>no</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </enum>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:    </loader>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:  </os>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:  <cpu>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:    <mode name='host-passthrough' supported='yes'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <enum name='hostPassthroughMigratable'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <value>on</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <value>off</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </enum>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:    </mode>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:    <mode name='maximum' supported='yes'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <enum name='maximumMigratable'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <value>on</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <value>off</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </enum>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:    </mode>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:    <mode name='host-model' supported='yes'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model fallback='forbid'>EPYC-Rome</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <vendor>AMD</vendor>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <maxphysaddr mode='passthrough' limit='40'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <feature policy='require' name='x2apic'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <feature policy='require' name='tsc-deadline'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <feature policy='require' name='hypervisor'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <feature policy='require' name='tsc_adjust'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <feature policy='require' name='spec-ctrl'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <feature policy='require' name='stibp'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <feature policy='require' name='ssbd'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <feature policy='require' name='cmp_legacy'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <feature policy='require' name='overflow-recov'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <feature policy='require' name='succor'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <feature policy='require' name='ibrs'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <feature policy='require' name='amd-ssbd'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <feature policy='require' name='virt-ssbd'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <feature policy='require' name='lbrv'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <feature policy='require' name='tsc-scale'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <feature policy='require' name='vmcb-clean'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <feature policy='require' name='flushbyasid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <feature policy='require' name='pause-filter'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <feature policy='require' name='pfthreshold'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <feature policy='require' name='svme-addr-chk'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <feature policy='require' name='lfence-always-serializing'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <feature policy='disable' name='xsaves'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:    </mode>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:    <mode name='custom' supported='yes'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='Broadwell'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='hle'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='rtm'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='Broadwell-IBRS'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='hle'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='rtm'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='Broadwell-noTSX'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='Broadwell-noTSX-IBRS'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='Broadwell-v1'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='hle'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='rtm'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='Broadwell-v2'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='Broadwell-v3'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='hle'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='rtm'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='Broadwell-v4'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='Cascadelake-Server'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512bw'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512cd'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512dq'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512f'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vl'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vnni'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='hle'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pku'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='rtm'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='Cascadelake-Server-noTSX'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512bw'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512cd'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512dq'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512f'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vl'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vnni'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='ibrs-all'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pku'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='Cascadelake-Server-v1'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512bw'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512cd'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512dq'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512f'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vl'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vnni'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='hle'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pku'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='rtm'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='Cascadelake-Server-v2'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512bw'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512cd'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512dq'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512f'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vl'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vnni'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='hle'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='ibrs-all'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pku'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='rtm'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='Cascadelake-Server-v3'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512bw'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512cd'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512dq'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512f'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vl'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vnni'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='ibrs-all'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pku'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='Cascadelake-Server-v4'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512bw'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512cd'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512dq'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512f'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vl'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vnni'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='ibrs-all'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pku'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='Cascadelake-Server-v5'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512bw'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512cd'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512dq'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512f'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vl'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vnni'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='ibrs-all'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pku'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='xsaves'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='Cooperlake'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512-bf16'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512bw'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512cd'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512dq'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512f'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vl'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vnni'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='hle'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='ibrs-all'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pku'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='rtm'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='taa-no'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='Cooperlake-v1'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512-bf16'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512bw'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512cd'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512dq'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512f'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vl'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vnni'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='hle'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='ibrs-all'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pku'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='rtm'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='taa-no'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='Cooperlake-v2'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512-bf16'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512bw'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512cd'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512dq'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512f'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vl'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vnni'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='hle'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='ibrs-all'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pku'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='rtm'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='taa-no'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='xsaves'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='Denverton'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='mpx'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='Denverton-v1'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='mpx'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='Denverton-v2'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='Denverton-v3'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='xsaves'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='Dhyana-v2'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='xsaves'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='EPYC-Genoa'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='amd-psfd'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='auto-ibrs'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512-bf16'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512-vpopcntdq'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512bitalg'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512bw'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512cd'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512dq'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512f'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512ifma'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vbmi'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vbmi2'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vl'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vnni'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='fsrm'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='gfni'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='la57'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='no-nested-data-bp'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='null-sel-clr-base'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pku'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='stibp-always-on'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='vaes'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='vpclmulqdq'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='xsaves'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='EPYC-Genoa-v1'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='amd-psfd'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='auto-ibrs'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512-bf16'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512-vpopcntdq'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512bitalg'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512bw'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512cd'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512dq'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512f'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512ifma'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vbmi'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vbmi2'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vl'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vnni'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='fsrm'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='gfni'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='la57'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='no-nested-data-bp'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='null-sel-clr-base'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pku'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='stibp-always-on'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='vaes'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='vpclmulqdq'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='xsaves'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='EPYC-Milan'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='fsrm'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pku'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='xsaves'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='EPYC-Milan-v1'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='fsrm'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pku'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='xsaves'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='EPYC-Milan-v2'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='amd-psfd'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='fsrm'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='no-nested-data-bp'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='null-sel-clr-base'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pku'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='stibp-always-on'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='vaes'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='vpclmulqdq'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='xsaves'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='EPYC-Rome'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='xsaves'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='EPYC-Rome-v1'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='xsaves'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='EPYC-Rome-v2'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='xsaves'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='EPYC-Rome-v3'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='xsaves'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='EPYC-v3'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='xsaves'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='EPYC-v4'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='xsaves'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='GraniteRapids'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='amx-bf16'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='amx-fp16'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='amx-int8'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='amx-tile'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx-vnni'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512-bf16'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512-fp16'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512-vpopcntdq'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512bitalg'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512bw'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512cd'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512dq'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512f'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512ifma'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vbmi'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vbmi2'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vl'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vnni'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='bus-lock-detect'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='fbsdp-no'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='fsrc'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='fsrm'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='fsrs'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='fzrm'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='gfni'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='hle'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='ibrs-all'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='la57'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='mcdt-no'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pbrsb-no'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pku'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='prefetchiti'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='psdp-no'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='rtm'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='sbdr-ssdp-no'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='serialize'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='taa-no'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='tsx-ldtrk'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='vaes'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='vpclmulqdq'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='xfd'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='xsaves'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='GraniteRapids-v1'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='amx-bf16'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='amx-fp16'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='amx-int8'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='amx-tile'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx-vnni'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512-bf16'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512-fp16'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512-vpopcntdq'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512bitalg'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512bw'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512cd'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512dq'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512f'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512ifma'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vbmi'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vbmi2'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vl'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vnni'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='bus-lock-detect'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='fbsdp-no'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='fsrc'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='fsrm'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='fsrs'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='fzrm'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='gfni'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='hle'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='ibrs-all'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='la57'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='mcdt-no'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pbrsb-no'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pku'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='prefetchiti'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='psdp-no'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='rtm'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='sbdr-ssdp-no'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='serialize'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='taa-no'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='tsx-ldtrk'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='vaes'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='vpclmulqdq'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='xfd'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='xsaves'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='GraniteRapids-v2'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='amx-bf16'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='amx-fp16'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='amx-int8'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='amx-tile'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx-vnni'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx10'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx10-128'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx10-256'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx10-512'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512-bf16'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512-fp16'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512-vpopcntdq'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512bitalg'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512bw'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512cd'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512dq'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512f'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512ifma'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vbmi'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vbmi2'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vl'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vnni'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='bus-lock-detect'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='cldemote'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='fbsdp-no'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='fsrc'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='fsrm'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='fsrs'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='fzrm'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='gfni'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='hle'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='ibrs-all'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='la57'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='mcdt-no'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='movdir64b'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='movdiri'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pbrsb-no'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pku'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='prefetchiti'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='psdp-no'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='rtm'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='sbdr-ssdp-no'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='serialize'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='ss'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='taa-no'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='tsx-ldtrk'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='vaes'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='vpclmulqdq'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='xfd'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='xsaves'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='Haswell'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='hle'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='rtm'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='Haswell-IBRS'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='hle'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='rtm'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='Haswell-noTSX'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='Haswell-noTSX-IBRS'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='Haswell-v1'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='hle'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='rtm'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='Haswell-v2'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='Haswell-v3'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='hle'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='rtm'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='Haswell-v4'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='Icelake-Server'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512-vpopcntdq'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512bitalg'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512bw'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512cd'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512dq'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512f'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vbmi'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vbmi2'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vl'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vnni'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='gfni'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='hle'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='la57'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pku'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='rtm'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='vaes'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='vpclmulqdq'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='Icelake-Server-noTSX'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512-vpopcntdq'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512bitalg'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512bw'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512cd'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512dq'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512f'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vbmi'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vbmi2'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vl'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vnni'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='gfni'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='la57'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pku'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='vaes'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='vpclmulqdq'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='Icelake-Server-v1'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512-vpopcntdq'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512bitalg'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512bw'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512cd'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512dq'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512f'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vbmi'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vbmi2'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vl'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vnni'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='gfni'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='hle'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='la57'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pku'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='rtm'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='vaes'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='vpclmulqdq'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='Icelake-Server-v2'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512-vpopcntdq'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512bitalg'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512bw'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512cd'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512dq'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512f'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vbmi'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vbmi2'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vl'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vnni'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='gfni'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='la57'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pku'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='vaes'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='vpclmulqdq'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='Icelake-Server-v3'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512-vpopcntdq'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512bitalg'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512bw'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512cd'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512dq'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512f'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vbmi'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vbmi2'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vl'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vnni'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='gfni'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='ibrs-all'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='la57'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pku'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='taa-no'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='vaes'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='vpclmulqdq'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='Icelake-Server-v4'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512-vpopcntdq'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512bitalg'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512bw'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512cd'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512dq'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512f'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512ifma'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vbmi'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vbmi2'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vl'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vnni'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='fsrm'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='gfni'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='ibrs-all'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='la57'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pku'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='taa-no'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='vaes'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='vpclmulqdq'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='Icelake-Server-v5'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512-vpopcntdq'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512bitalg'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512bw'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512cd'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512dq'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512f'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512ifma'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vbmi'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vbmi2'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vl'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vnni'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='fsrm'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='gfni'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='ibrs-all'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='la57'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pku'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='taa-no'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='vaes'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='vpclmulqdq'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='xsaves'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='Icelake-Server-v6'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512-vpopcntdq'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512bitalg'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512bw'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512cd'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512dq'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512f'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512ifma'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vbmi'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vbmi2'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vl'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vnni'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='fsrm'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='gfni'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='ibrs-all'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='la57'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pku'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='taa-no'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='vaes'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='vpclmulqdq'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='xsaves'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='Icelake-Server-v7'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512-vpopcntdq'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512bitalg'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512bw'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512cd'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512dq'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512f'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512ifma'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vbmi'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vbmi2'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vl'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vnni'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='fsrm'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='gfni'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='hle'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='ibrs-all'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='la57'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pku'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='rtm'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='taa-no'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='vaes'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='vpclmulqdq'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='xsaves'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='IvyBridge'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='IvyBridge-IBRS'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='IvyBridge-v1'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='IvyBridge-v2'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='KnightsMill'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512-4fmaps'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512-4vnniw'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512-vpopcntdq'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512cd'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512er'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512f'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512pf'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='ss'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='KnightsMill-v1'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512-4fmaps'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512-4vnniw'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512-vpopcntdq'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512cd'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512er'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512f'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512pf'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='ss'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='Opteron_G4'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='fma4'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='xop'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='Opteron_G4-v1'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='fma4'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='xop'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='Opteron_G5'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='fma4'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='tbm'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='xop'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='Opteron_G5-v1'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='fma4'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='tbm'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='xop'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='SapphireRapids'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='amx-bf16'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='amx-int8'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='amx-tile'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx-vnni'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512-bf16'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512-fp16'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512-vpopcntdq'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512bitalg'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512bw'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512cd'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512dq'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512f'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512ifma'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vbmi'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vbmi2'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vl'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vnni'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='bus-lock-detect'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='fsrc'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='fsrm'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='fsrs'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='fzrm'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='gfni'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='hle'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='ibrs-all'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='la57'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pku'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='rtm'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='serialize'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='taa-no'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='tsx-ldtrk'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='vaes'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='vpclmulqdq'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='xfd'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='xsaves'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='SapphireRapids-v1'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='amx-bf16'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='amx-int8'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='amx-tile'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx-vnni'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512-bf16'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512-fp16'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512-vpopcntdq'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512bitalg'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512bw'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512cd'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512dq'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512f'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512ifma'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vbmi'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vbmi2'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vl'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vnni'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='bus-lock-detect'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='fsrc'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='fsrm'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='fsrs'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='fzrm'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='gfni'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='hle'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='ibrs-all'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='la57'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pku'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='rtm'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='serialize'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='taa-no'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='tsx-ldtrk'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='vaes'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='vpclmulqdq'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='xfd'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='xsaves'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='SapphireRapids-v2'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='amx-bf16'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='amx-int8'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='amx-tile'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx-vnni'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512-bf16'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512-fp16'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512-vpopcntdq'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512bitalg'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512bw'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512cd'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512dq'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512f'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512ifma'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vbmi'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vbmi2'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vl'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vnni'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='bus-lock-detect'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='fbsdp-no'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='fsrc'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='fsrm'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='fsrs'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='fzrm'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='gfni'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='hle'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='ibrs-all'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='la57'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pku'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='psdp-no'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='rtm'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='sbdr-ssdp-no'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='serialize'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='taa-no'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='tsx-ldtrk'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='vaes'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='vpclmulqdq'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='xfd'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='xsaves'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='SapphireRapids-v3'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='amx-bf16'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='amx-int8'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='amx-tile'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx-vnni'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512-bf16'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512-fp16'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512-vpopcntdq'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512bitalg'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512bw'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512cd'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512dq'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512f'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512ifma'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vbmi'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vbmi2'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vl'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vnni'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='bus-lock-detect'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='cldemote'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='fbsdp-no'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='fsrc'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='fsrm'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='fsrs'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='fzrm'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='gfni'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='hle'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='ibrs-all'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='la57'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='movdir64b'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='movdiri'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pku'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='psdp-no'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='rtm'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='sbdr-ssdp-no'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='serialize'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='ss'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='taa-no'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='tsx-ldtrk'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='vaes'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='vpclmulqdq'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='xfd'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='xsaves'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='SierraForest'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx-ifma'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx-ne-convert'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx-vnni'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx-vnni-int8'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='bus-lock-detect'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='cmpccxadd'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='fbsdp-no'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='fsrm'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='fsrs'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='gfni'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='ibrs-all'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='mcdt-no'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pbrsb-no'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pku'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='psdp-no'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='sbdr-ssdp-no'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='serialize'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='vaes'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='vpclmulqdq'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='xsaves'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='SierraForest-v1'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx-ifma'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx-ne-convert'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx-vnni'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx-vnni-int8'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='bus-lock-detect'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='cmpccxadd'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='fbsdp-no'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='fsrm'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='fsrs'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='gfni'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='ibrs-all'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='mcdt-no'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pbrsb-no'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pku'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='psdp-no'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='sbdr-ssdp-no'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='serialize'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='vaes'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='vpclmulqdq'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='xsaves'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='Skylake-Client'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='hle'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='rtm'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='Skylake-Client-IBRS'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='hle'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='rtm'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='Skylake-Client-v1'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='hle'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='rtm'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='Skylake-Client-v2'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='hle'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='rtm'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='Skylake-Client-v3'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='Skylake-Client-v4'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='xsaves'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='Skylake-Server'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512bw'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512cd'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512dq'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512f'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vl'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='hle'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pku'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='rtm'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='Skylake-Server-IBRS'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512bw'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512cd'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512dq'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512f'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vl'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='hle'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pku'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='rtm'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512bw'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512cd'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512dq'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512f'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vl'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pku'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='Skylake-Server-v1'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512bw'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512cd'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512dq'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512f'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vl'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='hle'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pku'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='rtm'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='Skylake-Server-v2'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512bw'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512cd'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512dq'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512f'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vl'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='hle'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pku'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='rtm'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='Skylake-Server-v3'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512bw'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512cd'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512dq'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512f'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vl'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pku'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='Skylake-Server-v4'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512bw'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512cd'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512dq'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512f'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vl'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pku'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='Skylake-Server-v5'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512bw'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512cd'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512dq'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512f'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vl'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pku'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='xsaves'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='Snowridge'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='cldemote'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='core-capability'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='gfni'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='movdir64b'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='movdiri'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='mpx'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='split-lock-detect'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='Snowridge-v1'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='cldemote'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='core-capability'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='gfni'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='movdir64b'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='movdiri'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='mpx'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='split-lock-detect'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='Snowridge-v2'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='cldemote'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='core-capability'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='gfni'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='movdir64b'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='movdiri'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='split-lock-detect'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='Snowridge-v3'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='cldemote'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='core-capability'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='gfni'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='movdir64b'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='movdiri'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='split-lock-detect'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='xsaves'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='Snowridge-v4'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='cldemote'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='gfni'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='movdir64b'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='movdiri'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='xsaves'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='athlon'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='3dnow'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='3dnowext'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='athlon-v1'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='3dnow'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='3dnowext'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='core2duo'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='ss'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='core2duo-v1'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='ss'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='coreduo'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='ss'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='coreduo-v1'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='ss'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='n270'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='ss'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='n270-v1'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='ss'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='phenom'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='3dnow'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='3dnowext'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='phenom-v1'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='3dnow'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='3dnowext'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:    </mode>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:  </cpu>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:  <memoryBacking supported='yes'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:    <enum name='sourceType'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <value>file</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <value>anonymous</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <value>memfd</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:    </enum>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:  </memoryBacking>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:  <devices>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:    <disk supported='yes'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <enum name='diskDevice'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <value>disk</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <value>cdrom</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <value>floppy</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <value>lun</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </enum>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <enum name='bus'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <value>fdc</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <value>scsi</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <value>virtio</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <value>usb</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <value>sata</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </enum>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <enum name='model'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <value>virtio</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <value>virtio-transitional</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <value>virtio-non-transitional</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </enum>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:    </disk>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:    <graphics supported='yes'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <enum name='type'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <value>vnc</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <value>egl-headless</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <value>dbus</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </enum>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:    </graphics>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:    <video supported='yes'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <enum name='modelType'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <value>vga</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <value>cirrus</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <value>virtio</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <value>none</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <value>bochs</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <value>ramfb</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </enum>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:    </video>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:    <hostdev supported='yes'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <enum name='mode'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <value>subsystem</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </enum>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <enum name='startupPolicy'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <value>default</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <value>mandatory</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <value>requisite</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <value>optional</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </enum>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <enum name='subsysType'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <value>usb</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <value>pci</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <value>scsi</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </enum>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <enum name='capsType'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <enum name='pciBackend'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:    </hostdev>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:    <rng supported='yes'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <enum name='model'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <value>virtio</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <value>virtio-transitional</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <value>virtio-non-transitional</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </enum>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <enum name='backendModel'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <value>random</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <value>egd</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <value>builtin</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </enum>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:    </rng>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:    <filesystem supported='yes'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <enum name='driverType'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <value>path</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <value>handle</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <value>virtiofs</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </enum>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:    </filesystem>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:    <tpm supported='yes'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <enum name='model'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <value>tpm-tis</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <value>tpm-crb</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </enum>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <enum name='backendModel'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <value>emulator</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <value>external</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </enum>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <enum name='backendVersion'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <value>2.0</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </enum>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:    </tpm>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:    <redirdev supported='yes'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <enum name='bus'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <value>usb</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </enum>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:    </redirdev>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:    <channel supported='yes'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <enum name='type'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <value>pty</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <value>unix</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </enum>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:    </channel>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:    <crypto supported='yes'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <enum name='model'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <enum name='type'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <value>qemu</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </enum>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <enum name='backendModel'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <value>builtin</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </enum>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:    </crypto>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:    <interface supported='yes'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <enum name='backendType'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <value>default</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <value>passt</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </enum>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:    </interface>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:    <panic supported='yes'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <enum name='model'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <value>isa</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <value>hyperv</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </enum>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:    </panic>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:    <console supported='yes'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <enum name='type'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <value>null</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <value>vc</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <value>pty</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <value>dev</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <value>file</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <value>pipe</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <value>stdio</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <value>udp</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <value>tcp</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <value>unix</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <value>qemu-vdagent</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <value>dbus</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </enum>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:    </console>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:  </devices>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:  <features>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:    <gic supported='no'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:    <vmcoreinfo supported='yes'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:    <genid supported='yes'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:    <backingStoreInput supported='yes'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:    <backup supported='yes'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:    <async-teardown supported='yes'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:    <ps2 supported='yes'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:    <sev supported='no'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:    <sgx supported='no'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:    <hyperv supported='yes'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <enum name='features'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <value>relaxed</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <value>vapic</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <value>spinlocks</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <value>vpindex</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <value>runtime</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <value>synic</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <value>stimer</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <value>reset</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <value>vendor_id</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <value>frequencies</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <value>reenlightenment</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <value>tlbflush</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <value>ipi</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <value>avic</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <value>emsr_bitmap</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <value>xmm_input</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </enum>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <defaults>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <spinlocks>4095</spinlocks>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <stimer_direct>on</stimer_direct>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <tlbflush_direct>on</tlbflush_direct>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <tlbflush_extended>on</tlbflush_extended>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <vendor_id>Linux KVM Hv</vendor_id>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </defaults>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:    </hyperv>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:    <launchSecurity supported='yes'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <enum name='sectype'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <value>tdx</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </enum>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:    </launchSecurity>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:  </features>
Dec  9 05:39:17 np0005551604 nova_compute[189493]: </domainCapabilities>
Dec  9 05:39:17 np0005551604 nova_compute[189493]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.784 189497 DEBUG nova.virt.libvirt.host [None req-bd919016-4d35-4252-9704-133b2c72d336 - - - - - -] Getting domain capabilities for x86_64 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Dec  9 05:39:17 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.789 189497 DEBUG nova.virt.libvirt.host [None req-bd919016-4d35-4252-9704-133b2c72d336 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Dec  9 05:39:17 np0005551604 nova_compute[189493]: <domainCapabilities>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:  <path>/usr/libexec/qemu-kvm</path>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:  <domain>kvm</domain>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:  <machine>pc-q35-rhel9.8.0</machine>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:  <arch>x86_64</arch>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:  <vcpu max='4096'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:  <iothreads supported='yes'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:  <os supported='yes'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:    <enum name='firmware'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <value>efi</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:    </enum>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:    <loader supported='yes'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <enum name='type'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <value>rom</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <value>pflash</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </enum>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <enum name='readonly'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <value>yes</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <value>no</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </enum>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <enum name='secure'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <value>yes</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <value>no</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </enum>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:    </loader>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:  </os>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:  <cpu>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:    <mode name='host-passthrough' supported='yes'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <enum name='hostPassthroughMigratable'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <value>on</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <value>off</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </enum>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:    </mode>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:    <mode name='maximum' supported='yes'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <enum name='maximumMigratable'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <value>on</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <value>off</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </enum>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:    </mode>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:    <mode name='host-model' supported='yes'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model fallback='forbid'>EPYC-Rome</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <vendor>AMD</vendor>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <maxphysaddr mode='passthrough' limit='40'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <feature policy='require' name='x2apic'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <feature policy='require' name='tsc-deadline'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <feature policy='require' name='hypervisor'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <feature policy='require' name='tsc_adjust'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <feature policy='require' name='spec-ctrl'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <feature policy='require' name='stibp'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <feature policy='require' name='ssbd'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <feature policy='require' name='cmp_legacy'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <feature policy='require' name='overflow-recov'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <feature policy='require' name='succor'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <feature policy='require' name='ibrs'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <feature policy='require' name='amd-ssbd'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <feature policy='require' name='virt-ssbd'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <feature policy='require' name='lbrv'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <feature policy='require' name='tsc-scale'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <feature policy='require' name='vmcb-clean'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <feature policy='require' name='flushbyasid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <feature policy='require' name='pause-filter'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <feature policy='require' name='pfthreshold'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <feature policy='require' name='svme-addr-chk'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <feature policy='require' name='lfence-always-serializing'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <feature policy='disable' name='xsaves'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:    </mode>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:    <mode name='custom' supported='yes'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='Broadwell'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='hle'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='rtm'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='Broadwell-IBRS'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='hle'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='rtm'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='Broadwell-noTSX'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='Broadwell-noTSX-IBRS'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='Broadwell-v1'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='hle'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='rtm'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='Broadwell-v2'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='Broadwell-v3'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='hle'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='rtm'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='Broadwell-v4'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='Cascadelake-Server'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512bw'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512cd'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512dq'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512f'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vl'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vnni'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='hle'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pku'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='rtm'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='Cascadelake-Server-noTSX'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512bw'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512cd'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512dq'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512f'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vl'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vnni'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='ibrs-all'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pku'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='Cascadelake-Server-v1'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512bw'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512cd'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512dq'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512f'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vl'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vnni'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='hle'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pku'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='rtm'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='Cascadelake-Server-v2'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512bw'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512cd'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512dq'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512f'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vl'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vnni'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='hle'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='ibrs-all'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pku'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='rtm'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='Cascadelake-Server-v3'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512bw'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512cd'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512dq'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512f'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vl'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vnni'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='ibrs-all'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pku'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='Cascadelake-Server-v4'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512bw'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512cd'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512dq'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512f'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vl'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vnni'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='ibrs-all'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pku'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='Cascadelake-Server-v5'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512bw'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512cd'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512dq'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512f'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vl'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vnni'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='ibrs-all'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pku'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='xsaves'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='Cooperlake'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512-bf16'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512bw'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512cd'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512dq'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512f'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vl'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vnni'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='hle'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='ibrs-all'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pku'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='rtm'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='taa-no'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='Cooperlake-v1'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512-bf16'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512bw'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512cd'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512dq'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512f'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vl'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vnni'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='hle'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='ibrs-all'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pku'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='rtm'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='taa-no'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='Cooperlake-v2'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512-bf16'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512bw'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512cd'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512dq'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512f'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vl'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vnni'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='hle'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='ibrs-all'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pku'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='rtm'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='taa-no'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='xsaves'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='Denverton'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='mpx'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='Denverton-v1'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='mpx'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='Denverton-v2'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='Denverton-v3'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='xsaves'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='Dhyana-v2'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='xsaves'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='EPYC-Genoa'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='amd-psfd'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='auto-ibrs'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512-bf16'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512-vpopcntdq'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512bitalg'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512bw'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512cd'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512dq'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512f'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512ifma'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vbmi'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vbmi2'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vl'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vnni'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='fsrm'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='gfni'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='la57'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='no-nested-data-bp'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='null-sel-clr-base'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pku'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='stibp-always-on'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='vaes'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='vpclmulqdq'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='xsaves'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='EPYC-Genoa-v1'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='amd-psfd'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='auto-ibrs'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512-bf16'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512-vpopcntdq'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512bitalg'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512bw'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512cd'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512dq'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512f'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512ifma'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vbmi'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vbmi2'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vl'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vnni'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='fsrm'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='gfni'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='la57'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='no-nested-data-bp'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='null-sel-clr-base'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pku'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='stibp-always-on'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='vaes'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='vpclmulqdq'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='xsaves'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='EPYC-Milan'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='fsrm'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pku'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='xsaves'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='EPYC-Milan-v1'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='fsrm'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pku'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='xsaves'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='EPYC-Milan-v2'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='amd-psfd'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='fsrm'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='no-nested-data-bp'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='null-sel-clr-base'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pku'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='stibp-always-on'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='vaes'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='vpclmulqdq'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='xsaves'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='EPYC-Rome'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='xsaves'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='EPYC-Rome-v1'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='xsaves'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='EPYC-Rome-v2'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='xsaves'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='EPYC-Rome-v3'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='xsaves'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='EPYC-v3'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='xsaves'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='EPYC-v4'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='xsaves'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='GraniteRapids'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='amx-bf16'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='amx-fp16'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='amx-int8'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='amx-tile'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx-vnni'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512-bf16'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512-fp16'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512-vpopcntdq'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512bitalg'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512bw'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512cd'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512dq'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512f'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512ifma'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vbmi'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vbmi2'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vl'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vnni'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='bus-lock-detect'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='fbsdp-no'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='fsrc'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='fsrm'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='fsrs'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='fzrm'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='gfni'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='hle'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='ibrs-all'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='la57'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='mcdt-no'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pbrsb-no'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pku'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='prefetchiti'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='psdp-no'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='rtm'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='sbdr-ssdp-no'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='serialize'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='taa-no'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='tsx-ldtrk'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='vaes'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='vpclmulqdq'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='xfd'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='xsaves'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='GraniteRapids-v1'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='amx-bf16'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='amx-fp16'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='amx-int8'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='amx-tile'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx-vnni'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512-bf16'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512-fp16'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512-vpopcntdq'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512bitalg'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512bw'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512cd'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512dq'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512f'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512ifma'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vbmi'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vbmi2'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vl'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vnni'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='bus-lock-detect'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='fbsdp-no'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='fsrc'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='fsrm'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='fsrs'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='fzrm'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='gfni'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='hle'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='ibrs-all'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='la57'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='mcdt-no'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pbrsb-no'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pku'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='prefetchiti'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='psdp-no'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='rtm'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='sbdr-ssdp-no'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='serialize'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='taa-no'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='tsx-ldtrk'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='vaes'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='vpclmulqdq'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='xfd'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='xsaves'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='GraniteRapids-v2'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='amx-bf16'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='amx-fp16'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='amx-int8'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='amx-tile'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx-vnni'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx10'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx10-128'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx10-256'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx10-512'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512-bf16'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512-fp16'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512-vpopcntdq'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512bitalg'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512bw'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512cd'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512dq'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512f'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512ifma'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vbmi'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vbmi2'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vl'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vnni'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='bus-lock-detect'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='cldemote'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='fbsdp-no'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='fsrc'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='fsrm'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='fsrs'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='fzrm'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='gfni'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='hle'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='ibrs-all'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='la57'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='mcdt-no'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='movdir64b'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='movdiri'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pbrsb-no'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pku'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='prefetchiti'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='psdp-no'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='rtm'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='sbdr-ssdp-no'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='serialize'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='ss'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='taa-no'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='tsx-ldtrk'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='vaes'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='vpclmulqdq'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='xfd'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='xsaves'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='Haswell'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='hle'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='rtm'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='Haswell-IBRS'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='hle'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='rtm'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='Haswell-noTSX'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='Haswell-noTSX-IBRS'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='Haswell-v1'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='hle'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='rtm'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='Haswell-v2'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='Haswell-v3'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='hle'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='rtm'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='Haswell-v4'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='Icelake-Server'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512-vpopcntdq'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512bitalg'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512bw'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512cd'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512dq'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512f'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vbmi'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vbmi2'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vl'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vnni'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='gfni'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='hle'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='la57'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pku'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='rtm'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='vaes'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='vpclmulqdq'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='Icelake-Server-noTSX'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512-vpopcntdq'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512bitalg'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512bw'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512cd'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512dq'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512f'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vbmi'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vbmi2'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vl'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vnni'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='gfni'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='la57'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pku'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='vaes'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='vpclmulqdq'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='Icelake-Server-v1'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512-vpopcntdq'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512bitalg'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512bw'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512cd'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512dq'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512f'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vbmi'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vbmi2'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vl'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vnni'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='gfni'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='hle'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='la57'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pku'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='rtm'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='vaes'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='vpclmulqdq'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='Icelake-Server-v2'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512-vpopcntdq'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512bitalg'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512bw'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512cd'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512dq'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512f'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vbmi'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vbmi2'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vl'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vnni'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='gfni'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='la57'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pku'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='vaes'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='vpclmulqdq'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='Icelake-Server-v3'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512-vpopcntdq'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512bitalg'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512bw'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512cd'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512dq'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512f'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vbmi'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vbmi2'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vl'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vnni'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='gfni'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='ibrs-all'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='la57'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pku'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='taa-no'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='vaes'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='vpclmulqdq'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='Icelake-Server-v4'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512-vpopcntdq'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512bitalg'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512bw'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512cd'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512dq'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512f'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512ifma'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vbmi'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vbmi2'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vl'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vnni'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='fsrm'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='gfni'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='ibrs-all'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='la57'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pku'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='taa-no'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='vaes'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='vpclmulqdq'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='Icelake-Server-v5'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512-vpopcntdq'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512bitalg'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512bw'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512cd'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512dq'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512f'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512ifma'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vbmi'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vbmi2'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vl'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vnni'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='fsrm'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='gfni'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='ibrs-all'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='la57'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pku'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='taa-no'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='vaes'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='vpclmulqdq'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='xsaves'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='Icelake-Server-v6'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512-vpopcntdq'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512bitalg'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512bw'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512cd'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512dq'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512f'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512ifma'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vbmi'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vbmi2'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vl'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vnni'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='fsrm'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='gfni'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='ibrs-all'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='la57'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pku'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='taa-no'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='vaes'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='vpclmulqdq'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='xsaves'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='Icelake-Server-v7'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512-vpopcntdq'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512bitalg'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512bw'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512cd'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512dq'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512f'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512ifma'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vbmi'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vbmi2'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vl'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vnni'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='fsrm'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='gfni'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='hle'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='ibrs-all'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='la57'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pku'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='rtm'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='taa-no'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='vaes'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='vpclmulqdq'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='xsaves'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='IvyBridge'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='IvyBridge-IBRS'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='IvyBridge-v1'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='IvyBridge-v2'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='KnightsMill'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512-4fmaps'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512-4vnniw'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512-vpopcntdq'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512cd'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512er'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512f'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512pf'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='ss'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='KnightsMill-v1'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512-4fmaps'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512-4vnniw'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512-vpopcntdq'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512cd'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512er'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512f'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512pf'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='ss'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='Opteron_G4'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='fma4'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='xop'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='Opteron_G4-v1'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='fma4'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='xop'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='Opteron_G5'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='fma4'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='tbm'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='xop'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='Opteron_G5-v1'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='fma4'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='tbm'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='xop'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='SapphireRapids'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='amx-bf16'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='amx-int8'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='amx-tile'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx-vnni'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512-bf16'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512-fp16'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512-vpopcntdq'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512bitalg'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512bw'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512cd'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512dq'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512f'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512ifma'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vbmi'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vbmi2'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vl'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vnni'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='bus-lock-detect'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='fsrc'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='fsrm'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='fsrs'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='fzrm'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='gfni'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='hle'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='ibrs-all'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='la57'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pku'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='rtm'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='serialize'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='taa-no'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='tsx-ldtrk'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='vaes'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='vpclmulqdq'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='xfd'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='xsaves'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='SapphireRapids-v1'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='amx-bf16'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='amx-int8'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='amx-tile'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx-vnni'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512-bf16'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512-fp16'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512-vpopcntdq'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512bitalg'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512bw'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512cd'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512dq'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512f'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512ifma'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vbmi'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vbmi2'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vl'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vnni'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='bus-lock-detect'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='fsrc'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='fsrm'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='fsrs'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='fzrm'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='gfni'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='hle'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='ibrs-all'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='la57'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pku'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='rtm'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='serialize'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='taa-no'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='tsx-ldtrk'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='vaes'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='vpclmulqdq'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='xfd'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='xsaves'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='SapphireRapids-v2'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='amx-bf16'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='amx-int8'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='amx-tile'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx-vnni'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512-bf16'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512-fp16'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512-vpopcntdq'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512bitalg'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512bw'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512cd'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512dq'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512f'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512ifma'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vbmi'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vbmi2'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vl'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vnni'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='bus-lock-detect'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='fbsdp-no'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='fsrc'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='fsrm'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='fsrs'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='fzrm'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='gfni'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='hle'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='ibrs-all'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='la57'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pku'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='psdp-no'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='rtm'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='sbdr-ssdp-no'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='serialize'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='taa-no'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='tsx-ldtrk'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='vaes'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='vpclmulqdq'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='xfd'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='xsaves'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='SapphireRapids-v3'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='amx-bf16'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='amx-int8'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='amx-tile'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx-vnni'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512-bf16'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512-fp16'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512-vpopcntdq'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512bitalg'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512bw'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512cd'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512dq'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512f'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512ifma'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vbmi'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vbmi2'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vl'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vnni'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='bus-lock-detect'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='cldemote'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='fbsdp-no'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='fsrc'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='fsrm'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='fsrs'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='fzrm'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='gfni'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='hle'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='ibrs-all'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='la57'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='movdir64b'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='movdiri'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pku'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='psdp-no'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='rtm'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='sbdr-ssdp-no'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='serialize'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='ss'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='taa-no'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='tsx-ldtrk'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='vaes'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='vpclmulqdq'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='xfd'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='xsaves'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='SierraForest'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx-ifma'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx-ne-convert'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx-vnni'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx-vnni-int8'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='bus-lock-detect'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='cmpccxadd'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='fbsdp-no'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='fsrm'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='fsrs'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='gfni'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='ibrs-all'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='mcdt-no'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pbrsb-no'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pku'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='psdp-no'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='sbdr-ssdp-no'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='serialize'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='vaes'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='vpclmulqdq'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='xsaves'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='SierraForest-v1'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx-ifma'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx-ne-convert'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx-vnni'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx-vnni-int8'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='bus-lock-detect'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='cmpccxadd'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='fbsdp-no'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='fsrm'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='fsrs'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='gfni'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='ibrs-all'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='mcdt-no'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pbrsb-no'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pku'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='psdp-no'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='sbdr-ssdp-no'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='serialize'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='vaes'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='vpclmulqdq'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='xsaves'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='Skylake-Client'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='hle'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='rtm'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='Skylake-Client-IBRS'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='hle'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='rtm'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='Skylake-Client-v1'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='hle'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='rtm'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='Skylake-Client-v2'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='hle'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='rtm'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='Skylake-Client-v3'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='Skylake-Client-v4'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='xsaves'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='Skylake-Server'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512bw'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512cd'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512dq'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512f'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vl'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='hle'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pku'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='rtm'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='Skylake-Server-IBRS'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512bw'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512cd'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512dq'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512f'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vl'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='hle'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pku'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='rtm'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512bw'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512cd'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512dq'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512f'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vl'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pku'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='Skylake-Server-v1'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512bw'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512cd'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512dq'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512f'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vl'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='hle'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pku'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='rtm'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='Skylake-Server-v2'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512bw'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512cd'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512dq'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512f'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vl'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='hle'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pku'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='rtm'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='Skylake-Server-v3'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512bw'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512cd'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512dq'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512f'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vl'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pku'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='Skylake-Server-v4'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512bw'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512cd'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512dq'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512f'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vl'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pku'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='Skylake-Server-v5'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512bw'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512cd'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512dq'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512f'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='avx512vl'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='pku'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='xsaves'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='Snowridge'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='cldemote'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='core-capability'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='gfni'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='movdir64b'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='movdiri'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='mpx'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='split-lock-detect'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='Snowridge-v1'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='cldemote'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='core-capability'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='gfni'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='movdir64b'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='movdiri'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='mpx'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='split-lock-detect'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='Snowridge-v2'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='cldemote'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='core-capability'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='gfni'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='movdir64b'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='movdiri'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='split-lock-detect'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='Snowridge-v3'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='cldemote'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='core-capability'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='gfni'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='movdir64b'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='movdiri'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='split-lock-detect'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='xsaves'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='Snowridge-v4'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='cldemote'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='gfni'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='movdir64b'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='movdiri'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='xsaves'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='athlon'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='3dnow'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='3dnowext'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='athlon-v1'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='3dnow'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='3dnowext'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='core2duo'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='ss'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='core2duo-v1'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='ss'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='coreduo'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='ss'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='coreduo-v1'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='ss'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='n270'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='ss'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='n270-v1'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='ss'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='phenom'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='3dnow'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='3dnowext'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <blockers model='phenom-v1'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='3dnow'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <feature name='3dnowext'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:    </mode>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:  </cpu>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:  <memoryBacking supported='yes'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:    <enum name='sourceType'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <value>file</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <value>anonymous</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <value>memfd</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:    </enum>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:  </memoryBacking>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:  <devices>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:    <disk supported='yes'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <enum name='diskDevice'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <value>disk</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <value>cdrom</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <value>floppy</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <value>lun</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </enum>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <enum name='bus'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <value>fdc</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <value>scsi</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <value>virtio</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <value>usb</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <value>sata</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </enum>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <enum name='model'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <value>virtio</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <value>virtio-transitional</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <value>virtio-non-transitional</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </enum>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:    </disk>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:    <graphics supported='yes'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <enum name='type'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <value>vnc</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <value>egl-headless</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <value>dbus</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </enum>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:    </graphics>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:    <video supported='yes'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <enum name='modelType'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <value>vga</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <value>cirrus</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <value>virtio</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <value>none</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <value>bochs</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <value>ramfb</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </enum>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:    </video>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:    <hostdev supported='yes'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <enum name='mode'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <value>subsystem</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </enum>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <enum name='startupPolicy'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <value>default</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <value>mandatory</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <value>requisite</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <value>optional</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </enum>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <enum name='subsysType'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <value>usb</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <value>pci</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <value>scsi</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </enum>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <enum name='capsType'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <enum name='pciBackend'/>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:    </hostdev>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:    <rng supported='yes'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <enum name='model'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <value>virtio</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <value>virtio-transitional</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <value>virtio-non-transitional</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </enum>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <enum name='backendModel'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <value>random</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <value>egd</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <value>builtin</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </enum>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:    </rng>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:    <filesystem supported='yes'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <enum name='driverType'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <value>path</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <value>handle</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <value>virtiofs</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </enum>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:    </filesystem>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:    <tpm supported='yes'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <enum name='model'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <value>tpm-tis</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <value>tpm-crb</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </enum>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <enum name='backendModel'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <value>emulator</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <value>external</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </enum>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <enum name='backendVersion'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <value>2.0</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </enum>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:    </tpm>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:    <redirdev supported='yes'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <enum name='bus'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:        <value>usb</value>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      </enum>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:    </redirdev>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:    <channel supported='yes'>
Dec  9 05:39:17 np0005551604 nova_compute[189493]:      <enum name='type'>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <value>pty</value>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <value>unix</value>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      </enum>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:    </channel>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:    <crypto supported='yes'>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <enum name='model'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <enum name='type'>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <value>qemu</value>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      </enum>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <enum name='backendModel'>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <value>builtin</value>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      </enum>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:    </crypto>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:    <interface supported='yes'>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <enum name='backendType'>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <value>default</value>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <value>passt</value>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      </enum>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:    </interface>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:    <panic supported='yes'>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <enum name='model'>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <value>isa</value>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <value>hyperv</value>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      </enum>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:    </panic>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:    <console supported='yes'>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <enum name='type'>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <value>null</value>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <value>vc</value>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <value>pty</value>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <value>dev</value>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <value>file</value>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <value>pipe</value>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <value>stdio</value>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <value>udp</value>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <value>tcp</value>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <value>unix</value>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <value>qemu-vdagent</value>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <value>dbus</value>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      </enum>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:    </console>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:  </devices>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:  <features>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:    <gic supported='no'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:    <vmcoreinfo supported='yes'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:    <genid supported='yes'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:    <backingStoreInput supported='yes'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:    <backup supported='yes'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:    <async-teardown supported='yes'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:    <ps2 supported='yes'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:    <sev supported='no'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:    <sgx supported='no'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:    <hyperv supported='yes'>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <enum name='features'>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <value>relaxed</value>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <value>vapic</value>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <value>spinlocks</value>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <value>vpindex</value>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <value>runtime</value>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <value>synic</value>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <value>stimer</value>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <value>reset</value>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <value>vendor_id</value>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <value>frequencies</value>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <value>reenlightenment</value>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <value>tlbflush</value>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <value>ipi</value>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <value>avic</value>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <value>emsr_bitmap</value>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <value>xmm_input</value>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      </enum>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <defaults>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <spinlocks>4095</spinlocks>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <stimer_direct>on</stimer_direct>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <tlbflush_direct>on</tlbflush_direct>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <tlbflush_extended>on</tlbflush_extended>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <vendor_id>Linux KVM Hv</vendor_id>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      </defaults>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:    </hyperv>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:    <launchSecurity supported='yes'>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <enum name='sectype'>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <value>tdx</value>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      </enum>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:    </launchSecurity>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:  </features>
Dec  9 05:39:18 np0005551604 nova_compute[189493]: </domainCapabilities>
Dec  9 05:39:18 np0005551604 nova_compute[189493]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Dec  9 05:39:18 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.877 189497 DEBUG nova.virt.libvirt.host [None req-bd919016-4d35-4252-9704-133b2c72d336 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Dec  9 05:39:18 np0005551604 nova_compute[189493]: <domainCapabilities>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:  <path>/usr/libexec/qemu-kvm</path>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:  <domain>kvm</domain>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:  <machine>pc-i440fx-rhel7.6.0</machine>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:  <arch>x86_64</arch>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:  <vcpu max='240'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:  <iothreads supported='yes'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:  <os supported='yes'>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:    <enum name='firmware'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:    <loader supported='yes'>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <enum name='type'>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <value>rom</value>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <value>pflash</value>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      </enum>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <enum name='readonly'>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <value>yes</value>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <value>no</value>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      </enum>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <enum name='secure'>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <value>no</value>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      </enum>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:    </loader>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:  </os>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:  <cpu>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:    <mode name='host-passthrough' supported='yes'>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <enum name='hostPassthroughMigratable'>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <value>on</value>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <value>off</value>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      </enum>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:    </mode>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:    <mode name='maximum' supported='yes'>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <enum name='maximumMigratable'>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <value>on</value>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <value>off</value>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      </enum>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:    </mode>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:    <mode name='host-model' supported='yes'>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <model fallback='forbid'>EPYC-Rome</model>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <vendor>AMD</vendor>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <maxphysaddr mode='passthrough' limit='40'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <feature policy='require' name='x2apic'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <feature policy='require' name='tsc-deadline'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <feature policy='require' name='hypervisor'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <feature policy='require' name='tsc_adjust'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <feature policy='require' name='spec-ctrl'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <feature policy='require' name='stibp'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <feature policy='require' name='ssbd'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <feature policy='require' name='cmp_legacy'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <feature policy='require' name='overflow-recov'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <feature policy='require' name='succor'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <feature policy='require' name='ibrs'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <feature policy='require' name='amd-ssbd'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <feature policy='require' name='virt-ssbd'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <feature policy='require' name='lbrv'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <feature policy='require' name='tsc-scale'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <feature policy='require' name='vmcb-clean'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <feature policy='require' name='flushbyasid'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <feature policy='require' name='pause-filter'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <feature policy='require' name='pfthreshold'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <feature policy='require' name='svme-addr-chk'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <feature policy='require' name='lfence-always-serializing'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <feature policy='disable' name='xsaves'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:    </mode>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:    <mode name='custom' supported='yes'>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <blockers model='Broadwell'>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='hle'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='rtm'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <blockers model='Broadwell-IBRS'>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='hle'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='rtm'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <blockers model='Broadwell-noTSX'>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <blockers model='Broadwell-noTSX-IBRS'>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel'>Broadwell-v1</model>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <blockers model='Broadwell-v1'>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='hle'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='rtm'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel'>Broadwell-v2</model>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <blockers model='Broadwell-v2'>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel'>Broadwell-v3</model>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <blockers model='Broadwell-v3'>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='hle'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='rtm'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel'>Broadwell-v4</model>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <blockers model='Broadwell-v4'>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <blockers model='Cascadelake-Server'>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512bw'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512cd'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512dq'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512f'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512vl'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512vnni'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='hle'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='pku'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='rtm'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <blockers model='Cascadelake-Server-noTSX'>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512bw'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512cd'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512dq'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512f'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512vl'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512vnni'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='ibrs-all'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='pku'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <blockers model='Cascadelake-Server-v1'>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512bw'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512cd'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512dq'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512f'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512vl'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512vnni'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='hle'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='pku'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='rtm'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <blockers model='Cascadelake-Server-v2'>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512bw'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512cd'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512dq'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512f'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512vl'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512vnni'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='hle'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='ibrs-all'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='pku'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='rtm'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <blockers model='Cascadelake-Server-v3'>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512bw'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512cd'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512dq'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512f'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512vl'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512vnni'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='ibrs-all'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='pku'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <blockers model='Cascadelake-Server-v4'>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512bw'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512cd'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512dq'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512f'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512vl'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512vnni'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='ibrs-all'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='pku'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <blockers model='Cascadelake-Server-v5'>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512bw'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512cd'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512dq'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512f'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512vl'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512vnni'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='ibrs-all'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='pku'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='xsaves'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <blockers model='Cooperlake'>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512-bf16'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512bw'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512cd'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512dq'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512f'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512vl'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512vnni'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='hle'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='ibrs-all'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='pku'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='rtm'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='taa-no'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <blockers model='Cooperlake-v1'>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512-bf16'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512bw'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512cd'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512dq'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512f'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512vl'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512vnni'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='hle'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='ibrs-all'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='pku'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='rtm'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='taa-no'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <blockers model='Cooperlake-v2'>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512-bf16'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512bw'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512cd'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512dq'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512f'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512vl'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512vnni'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='hle'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='ibrs-all'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='pku'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='rtm'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='taa-no'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='xsaves'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <blockers model='Denverton'>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='mpx'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel'>Denverton-v1</model>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <blockers model='Denverton-v1'>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='mpx'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel'>Denverton-v2</model>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <blockers model='Denverton-v2'>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel'>Denverton-v3</model>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <blockers model='Denverton-v3'>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='xsaves'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <blockers model='Dhyana-v2'>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='xsaves'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <blockers model='EPYC-Genoa'>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='amd-psfd'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='auto-ibrs'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512-bf16'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512-vpopcntdq'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512bitalg'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512bw'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512cd'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512dq'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512f'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512ifma'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512vbmi'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512vbmi2'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512vl'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512vnni'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='fsrm'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='gfni'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='la57'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='no-nested-data-bp'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='null-sel-clr-base'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='pku'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='stibp-always-on'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='vaes'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='vpclmulqdq'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='xsaves'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <blockers model='EPYC-Genoa-v1'>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='amd-psfd'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='auto-ibrs'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512-bf16'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512-vpopcntdq'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512bitalg'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512bw'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512cd'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512dq'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512f'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512ifma'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512vbmi'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512vbmi2'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512vl'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512vnni'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='fsrm'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='gfni'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='la57'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='no-nested-data-bp'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='null-sel-clr-base'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='pku'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='stibp-always-on'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='vaes'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='vpclmulqdq'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='xsaves'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <blockers model='EPYC-Milan'>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='fsrm'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='pku'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='xsaves'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <blockers model='EPYC-Milan-v1'>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='fsrm'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='pku'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='xsaves'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <blockers model='EPYC-Milan-v2'>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='amd-psfd'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='fsrm'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='no-nested-data-bp'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='null-sel-clr-base'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='pku'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='stibp-always-on'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='vaes'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='vpclmulqdq'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='xsaves'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <blockers model='EPYC-Rome'>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='xsaves'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <blockers model='EPYC-Rome-v1'>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='xsaves'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <blockers model='EPYC-Rome-v2'>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='xsaves'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <blockers model='EPYC-Rome-v3'>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='xsaves'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <model usable='yes' vendor='AMD'>EPYC-v1</model>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <model usable='yes' vendor='AMD'>EPYC-v2</model>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <model usable='no' vendor='AMD'>EPYC-v3</model>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <blockers model='EPYC-v3'>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='xsaves'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <model usable='no' vendor='AMD'>EPYC-v4</model>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <blockers model='EPYC-v4'>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='xsaves'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <blockers model='GraniteRapids'>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='amx-bf16'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='amx-fp16'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='amx-int8'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='amx-tile'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx-vnni'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512-bf16'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512-fp16'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512-vpopcntdq'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512bitalg'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512bw'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512cd'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512dq'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512f'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512ifma'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512vbmi'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512vbmi2'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512vl'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512vnni'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='bus-lock-detect'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='fbsdp-no'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='fsrc'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='fsrm'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='fsrs'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='fzrm'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='gfni'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='hle'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='ibrs-all'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='la57'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='mcdt-no'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='pbrsb-no'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='pku'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='prefetchiti'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='psdp-no'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='rtm'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='sbdr-ssdp-no'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='serialize'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='taa-no'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='tsx-ldtrk'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='vaes'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='vpclmulqdq'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='xfd'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='xsaves'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <blockers model='GraniteRapids-v1'>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='amx-bf16'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='amx-fp16'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='amx-int8'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='amx-tile'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx-vnni'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512-bf16'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512-fp16'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512-vpopcntdq'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512bitalg'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512bw'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512cd'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512dq'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512f'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512ifma'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512vbmi'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512vbmi2'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512vl'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512vnni'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='bus-lock-detect'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='fbsdp-no'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='fsrc'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='fsrm'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='fsrs'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='fzrm'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='gfni'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='hle'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='ibrs-all'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='la57'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='mcdt-no'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='pbrsb-no'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='pku'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='prefetchiti'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='psdp-no'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='rtm'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='sbdr-ssdp-no'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='serialize'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='taa-no'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='tsx-ldtrk'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='vaes'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='vpclmulqdq'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='xfd'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='xsaves'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <blockers model='GraniteRapids-v2'>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='amx-bf16'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='amx-fp16'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='amx-int8'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='amx-tile'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx-vnni'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx10'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx10-128'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx10-256'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx10-512'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512-bf16'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512-fp16'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512-vpopcntdq'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512bitalg'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512bw'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512cd'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512dq'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512f'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512ifma'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512vbmi'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512vbmi2'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512vl'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512vnni'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='bus-lock-detect'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='cldemote'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='fbsdp-no'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='fsrc'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='fsrm'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='fsrs'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='fzrm'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='gfni'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='hle'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='ibrs-all'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='la57'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='mcdt-no'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='movdir64b'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='movdiri'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='pbrsb-no'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='pku'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='prefetchiti'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='psdp-no'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='rtm'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='sbdr-ssdp-no'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='serialize'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='ss'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='taa-no'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='tsx-ldtrk'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='vaes'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='vpclmulqdq'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='xfd'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='xsaves'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <blockers model='Haswell'>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='hle'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='rtm'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <blockers model='Haswell-IBRS'>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='hle'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='rtm'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <blockers model='Haswell-noTSX'>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <blockers model='Haswell-noTSX-IBRS'>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel'>Haswell-v1</model>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <blockers model='Haswell-v1'>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='hle'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='rtm'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel'>Haswell-v2</model>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <blockers model='Haswell-v2'>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel'>Haswell-v3</model>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <blockers model='Haswell-v3'>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='hle'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='rtm'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel'>Haswell-v4</model>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <blockers model='Haswell-v4'>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <blockers model='Icelake-Server'>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512-vpopcntdq'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512bitalg'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512bw'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512cd'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512dq'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512f'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512vbmi'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512vbmi2'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512vl'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512vnni'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='gfni'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='hle'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='la57'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='pku'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='rtm'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='vaes'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='vpclmulqdq'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <blockers model='Icelake-Server-noTSX'>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512-vpopcntdq'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512bitalg'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512bw'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512cd'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512dq'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512f'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512vbmi'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512vbmi2'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512vl'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512vnni'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='gfni'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='la57'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='pku'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='vaes'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='vpclmulqdq'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <blockers model='Icelake-Server-v1'>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512-vpopcntdq'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512bitalg'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512bw'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512cd'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512dq'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512f'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512vbmi'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512vbmi2'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512vl'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512vnni'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='gfni'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='hle'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='la57'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='pku'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='rtm'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='vaes'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='vpclmulqdq'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <blockers model='Icelake-Server-v2'>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512-vpopcntdq'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512bitalg'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512bw'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512cd'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512dq'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512f'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512vbmi'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512vbmi2'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512vl'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512vnni'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='gfni'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='la57'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='pku'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='vaes'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='vpclmulqdq'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <blockers model='Icelake-Server-v3'>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512-vpopcntdq'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512bitalg'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512bw'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512cd'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512dq'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512f'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512vbmi'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512vbmi2'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512vl'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512vnni'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='gfni'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='ibrs-all'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='la57'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='pku'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='taa-no'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='vaes'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='vpclmulqdq'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <blockers model='Icelake-Server-v4'>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512-vpopcntdq'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512bitalg'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512bw'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512cd'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512dq'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512f'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512ifma'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512vbmi'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512vbmi2'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512vl'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512vnni'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='fsrm'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='gfni'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='ibrs-all'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='la57'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='pku'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='taa-no'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='vaes'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='vpclmulqdq'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <blockers model='Icelake-Server-v5'>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512-vpopcntdq'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512bitalg'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512bw'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512cd'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512dq'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512f'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512ifma'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512vbmi'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512vbmi2'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512vl'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512vnni'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='fsrm'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='gfni'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='ibrs-all'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='la57'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='pku'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='taa-no'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='vaes'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='vpclmulqdq'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='xsaves'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <blockers model='Icelake-Server-v6'>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512-vpopcntdq'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512bitalg'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512bw'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512cd'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512dq'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512f'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512ifma'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512vbmi'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512vbmi2'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512vl'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512vnni'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='fsrm'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='gfni'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='ibrs-all'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='la57'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='pku'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='taa-no'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='vaes'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='vpclmulqdq'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='xsaves'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <blockers model='Icelake-Server-v7'>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512-vpopcntdq'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512bitalg'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512bw'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512cd'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512dq'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512f'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512ifma'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512vbmi'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512vbmi2'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512vl'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512vnni'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='fsrm'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='gfni'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='hle'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='ibrs-all'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='la57'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='pku'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='rtm'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='taa-no'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='vaes'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='vpclmulqdq'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='xsaves'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <blockers model='IvyBridge'>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <blockers model='IvyBridge-IBRS'>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <blockers model='IvyBridge-v1'>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <blockers model='IvyBridge-v2'>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <blockers model='KnightsMill'>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512-4fmaps'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512-4vnniw'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512-vpopcntdq'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512cd'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512er'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512f'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512pf'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='ss'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <blockers model='KnightsMill-v1'>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512-4fmaps'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512-4vnniw'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512-vpopcntdq'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512cd'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512er'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512f'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512pf'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='ss'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <blockers model='Opteron_G4'>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='fma4'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='xop'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <blockers model='Opteron_G4-v1'>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='fma4'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='xop'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <blockers model='Opteron_G5'>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='fma4'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='tbm'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='xop'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <blockers model='Opteron_G5-v1'>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='fma4'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='tbm'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='xop'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <blockers model='SapphireRapids'>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='amx-bf16'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='amx-int8'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='amx-tile'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx-vnni'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512-bf16'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512-fp16'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512-vpopcntdq'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512bitalg'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512bw'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512cd'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512dq'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512f'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512ifma'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512vbmi'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512vbmi2'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512vl'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512vnni'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='bus-lock-detect'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='fsrc'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='fsrm'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='fsrs'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='fzrm'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='gfni'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='hle'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='ibrs-all'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='la57'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='pku'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='rtm'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='serialize'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='taa-no'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='tsx-ldtrk'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='vaes'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='vpclmulqdq'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='xfd'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='xsaves'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <blockers model='SapphireRapids-v1'>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='amx-bf16'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='amx-int8'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='amx-tile'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx-vnni'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512-bf16'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512-fp16'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512-vpopcntdq'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512bitalg'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512bw'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512cd'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512dq'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512f'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512ifma'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512vbmi'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512vbmi2'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512vl'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512vnni'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='bus-lock-detect'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='fsrc'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='fsrm'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='fsrs'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='fzrm'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='gfni'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='hle'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='ibrs-all'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='la57'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='pku'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='rtm'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='serialize'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='taa-no'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='tsx-ldtrk'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='vaes'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='vpclmulqdq'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='xfd'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='xsaves'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <blockers model='SapphireRapids-v2'>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='amx-bf16'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='amx-int8'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='amx-tile'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx-vnni'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512-bf16'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512-fp16'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512-vpopcntdq'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512bitalg'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512bw'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512cd'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512dq'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512f'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512ifma'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512vbmi'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512vbmi2'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512vl'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512vnni'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='bus-lock-detect'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='fbsdp-no'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='fsrc'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='fsrm'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='fsrs'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='fzrm'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='gfni'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='hle'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='ibrs-all'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='la57'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='pku'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='psdp-no'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='rtm'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='sbdr-ssdp-no'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='serialize'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='taa-no'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='tsx-ldtrk'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='vaes'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='vpclmulqdq'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='xfd'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='xsaves'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <blockers model='SapphireRapids-v3'>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='amx-bf16'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='amx-int8'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='amx-tile'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx-vnni'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512-bf16'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512-fp16'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512-vpopcntdq'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512bitalg'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512bw'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512cd'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512dq'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512f'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512ifma'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512vbmi'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512vbmi2'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512vl'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512vnni'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='bus-lock-detect'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='cldemote'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='fbsdp-no'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='fsrc'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='fsrm'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='fsrs'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='fzrm'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='gfni'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='hle'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='ibrs-all'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='la57'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='movdir64b'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='movdiri'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='pku'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='psdp-no'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='rtm'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='sbdr-ssdp-no'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='serialize'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='ss'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='taa-no'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='tsx-ldtrk'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='vaes'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='vpclmulqdq'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='xfd'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='xsaves'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <blockers model='SierraForest'>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx-ifma'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx-ne-convert'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx-vnni'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx-vnni-int8'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='bus-lock-detect'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='cmpccxadd'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='fbsdp-no'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='fsrm'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='fsrs'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='gfni'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='ibrs-all'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='mcdt-no'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='pbrsb-no'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='pku'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='psdp-no'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='sbdr-ssdp-no'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='serialize'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='vaes'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='vpclmulqdq'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='xsaves'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel'>SierraForest-v1</model>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <blockers model='SierraForest-v1'>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx-ifma'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx-ne-convert'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx-vnni'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx-vnni-int8'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='bus-lock-detect'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='cmpccxadd'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='fbsdp-no'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='fsrm'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='fsrs'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='gfni'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='ibrs-all'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='mcdt-no'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='pbrsb-no'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='pku'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='psdp-no'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='sbdr-ssdp-no'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='serialize'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='vaes'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='vpclmulqdq'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='xsaves'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <blockers model='Skylake-Client'>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='hle'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='rtm'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <blockers model='Skylake-Client-IBRS'>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='hle'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='rtm'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <blockers model='Skylake-Client-noTSX-IBRS'>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <blockers model='Skylake-Client-v1'>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='hle'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='rtm'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <blockers model='Skylake-Client-v2'>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='hle'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='rtm'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <blockers model='Skylake-Client-v3'>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <blockers model='Skylake-Client-v4'>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='xsaves'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <blockers model='Skylake-Server'>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512bw'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512cd'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512dq'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512f'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512vl'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='hle'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='pku'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='rtm'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <blockers model='Skylake-Server-IBRS'>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512bw'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512cd'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512dq'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512f'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512vl'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='hle'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='pku'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='rtm'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <blockers model='Skylake-Server-noTSX-IBRS'>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512bw'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512cd'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512dq'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512f'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512vl'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='pku'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <blockers model='Skylake-Server-v1'>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512bw'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512cd'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512dq'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512f'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512vl'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='hle'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='pku'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='rtm'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <blockers model='Skylake-Server-v2'>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512bw'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512cd'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512dq'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512f'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512vl'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='hle'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='pku'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='rtm'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <blockers model='Skylake-Server-v3'>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512bw'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512cd'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512dq'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512f'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512vl'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='pku'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <blockers model='Skylake-Server-v4'>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512bw'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512cd'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512dq'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512f'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512vl'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='pku'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <blockers model='Skylake-Server-v5'>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512bw'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512cd'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512dq'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512f'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='avx512vl'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='invpcid'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='pcid'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='pku'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='xsaves'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <blockers model='Snowridge'>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='cldemote'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='core-capability'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='gfni'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='movdir64b'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='movdiri'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='mpx'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='split-lock-detect'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel'>Snowridge-v1</model>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <blockers model='Snowridge-v1'>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='cldemote'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='core-capability'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='gfni'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='movdir64b'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='movdiri'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='mpx'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='split-lock-detect'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel'>Snowridge-v2</model>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <blockers model='Snowridge-v2'>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='cldemote'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='core-capability'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='gfni'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='movdir64b'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='movdiri'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='split-lock-detect'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel'>Snowridge-v3</model>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <blockers model='Snowridge-v3'>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='cldemote'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='core-capability'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='gfni'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='movdir64b'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='movdiri'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='split-lock-detect'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='xsaves'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <model usable='no' vendor='Intel'>Snowridge-v4</model>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <blockers model='Snowridge-v4'>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='cldemote'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='erms'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='gfni'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='movdir64b'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='movdiri'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='xsaves'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <model usable='yes' vendor='Intel'>Westmere-v1</model>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <model usable='yes' vendor='Intel'>Westmere-v2</model>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <blockers model='athlon'>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='3dnow'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='3dnowext'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <blockers model='athlon-v1'>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='3dnow'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='3dnowext'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <blockers model='core2duo'>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='ss'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <blockers model='core2duo-v1'>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='ss'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <blockers model='coreduo'>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='ss'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <blockers model='coreduo-v1'>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='ss'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <blockers model='n270'>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='ss'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <blockers model='n270-v1'>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='ss'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <blockers model='phenom'>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='3dnow'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='3dnowext'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <blockers model='phenom-v1'>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='3dnow'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <feature name='3dnowext'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      </blockers>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:    </mode>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:  </cpu>
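The <cpu> section that closes above is libvirt domain-capabilities output: each <model usable='yes|no'> entry tells nova-compute whether that named CPU model can be requested on this host, and each <blockers> list names the host-missing features that make a model unusable (on this guest, for instance, every AVX-512-dependent model is blocked). A minimal sketch of querying the same data directly, assuming the libvirt-python binding is installed and a local qemu:///system connection is reachable (both assumptions, not shown in this log):

    # Sketch: list CPU-model usability from libvirt domain capabilities.
    # Assumes libvirt-python and a reachable qemu:///system URI.
    import libvirt
    import xml.etree.ElementTree as ET

    conn = libvirt.open('qemu:///system')
    # Signature: getDomainCapabilities(emulatorbin, arch, machine, virttype, flags)
    caps_xml = conn.getDomainCapabilities(None, 'x86_64', None, 'kvm', 0)
    root = ET.fromstring(caps_xml)
    for model in root.findall(".//cpu/mode[@name='custom']/model"):
        print(model.get('usable'), model.text)
    for blk in root.findall(".//cpu/mode[@name='custom']/blockers"):
        names = [f.get('name') for f in blk.findall('feature')]
        print(blk.get('model'), 'blocked by:', ', '.join(names))
    conn.close()

The same XML can be printed on the host with `virsh domcapabilities --virttype kvm --arch x86_64`.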
Dec  9 05:39:18 np0005551604 nova_compute[189493]:  <memoryBacking supported='yes'>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:    <enum name='sourceType'>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <value>file</value>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <value>anonymous</value>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <value>memfd</value>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:    </enum>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:  </memoryBacking>
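Everything here, from the <cpu> block above through the <devices> section below, belongs to one multi-line nova_compute log record. To recover the raw XML for offline parsing, one approach is to strip the syslog prefix from each line; a sketch, assuming the record lives in /var/log/messages (the path and the single-record assumption are hypothetical):

    # Sketch: strip the syslog prefix to reassemble the logged capabilities XML.
    # Log path is an assumption; adjust the pattern if the prefix format differs.
    import re

    prefix = re.compile(r"^\w{3} +\d+ [\d:]{8} \S+ nova_compute\[\d+\]: ")
    xml_lines = []
    with open("/var/log/messages") as log:
        for line in log:
            m = prefix.match(line)
            if m:
                xml_lines.append(line[m.end():])
    capabilities_xml = "".join(xml_lines)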
Dec  9 05:39:18 np0005551604 nova_compute[189493]:  <devices>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:    <disk supported='yes'>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <enum name='diskDevice'>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <value>disk</value>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <value>cdrom</value>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <value>floppy</value>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <value>lun</value>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      </enum>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <enum name='bus'>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <value>ide</value>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <value>fdc</value>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <value>scsi</value>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <value>virtio</value>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <value>usb</value>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <value>sata</value>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      </enum>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <enum name='model'>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <value>virtio</value>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <value>virtio-transitional</value>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <value>virtio-non-transitional</value>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      </enum>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:    </disk>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:    <graphics supported='yes'>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <enum name='type'>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <value>vnc</value>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <value>egl-headless</value>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <value>dbus</value>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      </enum>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:    </graphics>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:    <video supported='yes'>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <enum name='modelType'>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <value>vga</value>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <value>cirrus</value>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <value>virtio</value>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <value>none</value>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <value>bochs</value>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <value>ramfb</value>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      </enum>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:    </video>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:    <hostdev supported='yes'>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <enum name='mode'>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <value>subsystem</value>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      </enum>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <enum name='startupPolicy'>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <value>default</value>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <value>mandatory</value>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <value>requisite</value>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <value>optional</value>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      </enum>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <enum name='subsysType'>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <value>usb</value>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <value>pci</value>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <value>scsi</value>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      </enum>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <enum name='capsType'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <enum name='pciBackend'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:    </hostdev>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:    <rng supported='yes'>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <enum name='model'>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <value>virtio</value>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <value>virtio-transitional</value>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <value>virtio-non-transitional</value>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      </enum>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <enum name='backendModel'>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <value>random</value>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <value>egd</value>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <value>builtin</value>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      </enum>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:    </rng>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:    <filesystem supported='yes'>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <enum name='driverType'>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <value>path</value>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <value>handle</value>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <value>virtiofs</value>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      </enum>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:    </filesystem>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:    <tpm supported='yes'>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <enum name='model'>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <value>tpm-tis</value>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <value>tpm-crb</value>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      </enum>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <enum name='backendModel'>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <value>emulator</value>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <value>external</value>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      </enum>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <enum name='backendVersion'>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <value>2.0</value>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      </enum>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:    </tpm>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:    <redirdev supported='yes'>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <enum name='bus'>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <value>usb</value>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      </enum>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:    </redirdev>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:    <channel supported='yes'>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <enum name='type'>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <value>pty</value>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <value>unix</value>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      </enum>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:    </channel>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:    <crypto supported='yes'>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <enum name='model'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <enum name='type'>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <value>qemu</value>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      </enum>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <enum name='backendModel'>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <value>builtin</value>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      </enum>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:    </crypto>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:    <interface supported='yes'>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <enum name='backendType'>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <value>default</value>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <value>passt</value>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      </enum>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:    </interface>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:    <panic supported='yes'>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <enum name='model'>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <value>isa</value>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <value>hyperv</value>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      </enum>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:    </panic>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:    <console supported='yes'>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <enum name='type'>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <value>null</value>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <value>vc</value>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <value>pty</value>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <value>dev</value>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <value>file</value>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <value>pipe</value>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <value>stdio</value>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <value>udp</value>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <value>tcp</value>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <value>unix</value>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <value>qemu-vdagent</value>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <value>dbus</value>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      </enum>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:    </console>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:  </devices>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:  <features>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:    <gic supported='no'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:    <vmcoreinfo supported='yes'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:    <genid supported='yes'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:    <backingStoreInput supported='yes'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:    <backup supported='yes'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:    <async-teardown supported='yes'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:    <ps2 supported='yes'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:    <sev supported='no'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:    <sgx supported='no'/>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:    <hyperv supported='yes'>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <enum name='features'>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <value>relaxed</value>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <value>vapic</value>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <value>spinlocks</value>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <value>vpindex</value>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <value>runtime</value>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <value>synic</value>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <value>stimer</value>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <value>reset</value>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <value>vendor_id</value>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <value>frequencies</value>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <value>reenlightenment</value>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <value>tlbflush</value>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <value>ipi</value>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <value>avic</value>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <value>emsr_bitmap</value>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <value>xmm_input</value>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      </enum>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <defaults>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <spinlocks>4095</spinlocks>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <stimer_direct>on</stimer_direct>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <tlbflush_direct>on</tlbflush_direct>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <tlbflush_extended>on</tlbflush_extended>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <vendor_id>Linux KVM Hv</vendor_id>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      </defaults>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:    </hyperv>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:    <launchSecurity supported='yes'>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      <enum name='sectype'>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:        <value>tdx</value>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:      </enum>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:    </launchSecurity>
Dec  9 05:39:18 np0005551604 nova_compute[189493]:  </features>
Dec  9 05:39:18 np0005551604 nova_compute[189493]: </domainCapabilities>
Dec  9 05:39:18 np0005551604 nova_compute[189493]: _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037#033[00m
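The XML above is the tail of the libvirt domainCapabilities document that nova-compute fetches at startup to learn which device models and features (rng, TPM, hyperv enlightenments, launchSecurity, and so on) this hypervisor supports. The same document can be retrieved outside of nova with the libvirt Python bindings; a minimal sketch, assuming libvirt-python is installed and qemu:///system is reachable on this host:

    import libvirt
    import xml.etree.ElementTree as ET

    conn = libvirt.open("qemu:///system")
    # Same underlying call nova's host.py wraps: emulator, arch, machine, virttype, flags.
    xml_doc = conn.getDomainCapabilities(None, "x86_64", None, "kvm", 0)
    conn.close()

    root = ET.fromstring(xml_doc)
    # List each device element and whether it is supported, mirroring the dump above.
    for dev in root.find("devices"):
        print(dev.tag, dev.get("supported"))
    # e.g. confirm the TPM backend versions advertised (2.0 in the log above).
    for value in root.findall(".//tpm/enum[@name='backendVersion']/value"):
        print("tpm backendVersion:", value.text)

The equivalent CLI is `virsh domcapabilities --virttype kvm --arch x86_64`.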
Dec  9 05:39:18 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.970 189497 DEBUG nova.virt.libvirt.host [None req-bd919016-4d35-4252-9704-133b2c72d336 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782#033[00m
Dec  9 05:39:18 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.971 189497 INFO nova.virt.libvirt.host [None req-bd919016-4d35-4252-9704-133b2c72d336 - - - - - -] Secure Boot support detected#033[00m
Dec  9 05:39:18 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.973 189497 INFO nova.virt.libvirt.driver [None req-bd919016-4d35-4252-9704-133b2c72d336 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.#033[00m
Dec  9 05:39:18 np0005551604 nova_compute[189493]: 2025-12-09 10:39:17.981 189497 DEBUG nova.virt.libvirt.driver [None req-bd919016-4d35-4252-9704-133b2c72d336 - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097#033[00m
Dec  9 05:39:18 np0005551604 nova_compute[189493]: 2025-12-09 10:39:18.007 189497 INFO nova.virt.node [None req-bd919016-4d35-4252-9704-133b2c72d336 - - - - - -] Determined node identity cdc1168d-33c9-4d2c-8f23-1b695a68afd0 from /var/lib/nova/compute_id#033[00m
Dec  9 05:39:18 np0005551604 nova_compute[189493]: 2025-12-09 10:39:18.021 189497 WARNING nova.compute.manager [None req-bd919016-4d35-4252-9704-133b2c72d336 - - - - - -] Compute nodes ['cdc1168d-33c9-4d2c-8f23-1b695a68afd0'] for host compute-0.ctlplane.example.com were not found in the database. If this is the first time this service is starting on this host, then you can ignore this warning.#033[00m
Dec  9 05:39:18 np0005551604 nova_compute[189493]: 2025-12-09 10:39:18.047 189497 INFO nova.compute.manager [None req-bd919016-4d35-4252-9704-133b2c72d336 - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host#033[00m
Dec  9 05:39:18 np0005551604 nova_compute[189493]: 2025-12-09 10:39:18.063 189497 WARNING nova.compute.manager [None req-bd919016-4d35-4252-9704-133b2c72d336 - - - - - -] No compute node record found for host compute-0.ctlplane.example.com. If this is the first time this service is starting on this host, then you can ignore this warning.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.#033[00m
Dec  9 05:39:18 np0005551604 nova_compute[189493]: 2025-12-09 10:39:18.063 189497 DEBUG oslo_concurrency.lockutils [None req-bd919016-4d35-4252-9704-133b2c72d336 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  9 05:39:18 np0005551604 nova_compute[189493]: 2025-12-09 10:39:18.064 189497 DEBUG oslo_concurrency.lockutils [None req-bd919016-4d35-4252-9704-133b2c72d336 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  9 05:39:18 np0005551604 nova_compute[189493]: 2025-12-09 10:39:18.064 189497 DEBUG oslo_concurrency.lockutils [None req-bd919016-4d35-4252-9704-133b2c72d336 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  9 05:39:18 np0005551604 nova_compute[189493]: 2025-12-09 10:39:18.064 189497 DEBUG nova.compute.resource_tracker [None req-bd919016-4d35-4252-9704-133b2c72d336 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  9 05:39:18 np0005551604 nova_compute[189493]: 2025-12-09 10:39:18.208 189497 WARNING nova.virt.libvirt.driver [None req-bd919016-4d35-4252-9704-133b2c72d336 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  9 05:39:18 np0005551604 nova_compute[189493]: 2025-12-09 10:39:18.209 189497 DEBUG nova.compute.resource_tracker [None req-bd919016-4d35-4252-9704-133b2c72d336 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5995MB free_disk=72.40912628173828GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  9 05:39:18 np0005551604 nova_compute[189493]: 2025-12-09 10:39:18.209 189497 DEBUG oslo_concurrency.lockutils [None req-bd919016-4d35-4252-9704-133b2c72d336 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  9 05:39:18 np0005551604 nova_compute[189493]: 2025-12-09 10:39:18.209 189497 DEBUG oslo_concurrency.lockutils [None req-bd919016-4d35-4252-9704-133b2c72d336 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  9 05:39:18 np0005551604 nova_compute[189493]: 2025-12-09 10:39:18.245 189497 WARNING nova.compute.resource_tracker [None req-bd919016-4d35-4252-9704-133b2c72d336 - - - - - -] No compute node record for compute-0.ctlplane.example.com:cdc1168d-33c9-4d2c-8f23-1b695a68afd0: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host cdc1168d-33c9-4d2c-8f23-1b695a68afd0 could not be found.#033[00m
Dec  9 05:39:18 np0005551604 nova_compute[189493]: 2025-12-09 10:39:18.266 189497 INFO nova.compute.resource_tracker [None req-bd919016-4d35-4252-9704-133b2c72d336 - - - - - -] Compute node record created for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com with uuid: cdc1168d-33c9-4d2c-8f23-1b695a68afd0#033[00m
Dec  9 05:39:18 np0005551604 nova_compute[189493]: 2025-12-09 10:39:18.327 189497 DEBUG nova.compute.resource_tracker [None req-bd919016-4d35-4252-9704-133b2c72d336 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  9 05:39:18 np0005551604 nova_compute[189493]: 2025-12-09 10:39:18.327 189497 DEBUG nova.compute.resource_tracker [None req-bd919016-4d35-4252-9704-133b2c72d336 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  9 05:39:19 np0005551604 nova_compute[189493]: 2025-12-09 10:39:19.583 189497 INFO nova.scheduler.client.report [None req-bd919016-4d35-4252-9704-133b2c72d336 - - - - - -] [req-e6f2114b-7555-4d84-9400-22ea6556edd7] Created resource provider record via placement API for resource provider with UUID cdc1168d-33c9-4d2c-8f23-1b695a68afd0 and name compute-0.ctlplane.example.com.#033[00m
Dec  9 05:39:20 np0005551604 nova_compute[189493]: 2025-12-09 10:39:20.175 189497 DEBUG nova.virt.libvirt.host [None req-bd919016-4d35-4252-9704-133b2c72d336 - - - - - -] /sys/module/kvm_amd/parameters/sev contains [N] _kernel_supports_amd_sev /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1803#033[00m
Dec  9 05:39:20 np0005551604 nova_compute[189493]: 2025-12-09 10:39:20.176 189497 INFO nova.virt.libvirt.host [None req-bd919016-4d35-4252-9704-133b2c72d336 - - - - - -] kernel doesn't support AMD SEV#033[00m
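nova's `_kernel_supports_amd_sev` check boils down to reading the kvm_amd module parameter shown above; the file contains N on this host, so SEV is reported as unsupported. A stand-alone sketch of the same probe (assumptions: the sysfs path only exists when kvm_amd is loaded, and a value of 1/Y means enabled):

    import os

    SEV_PARAM = "/sys/module/kvm_amd/parameters/sev"

    def kernel_supports_amd_sev() -> bool:
        # Missing file: kvm_amd is not loaded (e.g. an Intel host, as here).
        if not os.path.exists(SEV_PARAM):
            return False
        with open(SEV_PARAM) as f:
            return f.read().strip() in ("1", "Y", "y")

    print(kernel_supports_amd_sev())  # False on this host, matching the log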
Dec  9 05:39:20 np0005551604 nova_compute[189493]: 2025-12-09 10:39:20.177 189497 DEBUG nova.compute.provider_tree [None req-bd919016-4d35-4252-9704-133b2c72d336 - - - - - -] Updating inventory in ProviderTree for provider cdc1168d-33c9-4d2c-8f23-1b695a68afd0 with inventory: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 79, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Dec  9 05:39:20 np0005551604 nova_compute[189493]: 2025-12-09 10:39:20.177 189497 DEBUG nova.virt.libvirt.driver [None req-bd919016-4d35-4252-9704-133b2c72d336 - - - - - -] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  9 05:39:20 np0005551604 nova_compute[189493]: 2025-12-09 10:39:20.237 189497 DEBUG nova.scheduler.client.report [None req-bd919016-4d35-4252-9704-133b2c72d336 - - - - - -] Updated inventory for provider cdc1168d-33c9-4d2c-8f23-1b695a68afd0 with generation 0 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 79, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957#033[00m
Dec  9 05:39:20 np0005551604 nova_compute[189493]: 2025-12-09 10:39:20.237 189497 DEBUG nova.compute.provider_tree [None req-bd919016-4d35-4252-9704-133b2c72d336 - - - - - -] Updating resource provider cdc1168d-33c9-4d2c-8f23-1b695a68afd0 generation from 0 to 1 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164#033[00m
Dec  9 05:39:20 np0005551604 nova_compute[189493]: 2025-12-09 10:39:20.238 189497 DEBUG nova.compute.provider_tree [None req-bd919016-4d35-4252-9704-133b2c72d336 - - - - - -] Updating inventory in ProviderTree for provider cdc1168d-33c9-4d2c-8f23-1b695a68afd0 with inventory: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Dec  9 05:39:20 np0005551604 nova_compute[189493]: 2025-12-09 10:39:20.350 189497 DEBUG nova.compute.provider_tree [None req-bd919016-4d35-4252-9704-133b2c72d336 - - - - - -] Updating resource provider cdc1168d-33c9-4d2c-8f23-1b695a68afd0 generation from 1 to 2 during operation: update_traits _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164#033[00m
Dec  9 05:39:20 np0005551604 nova_compute[189493]: 2025-12-09 10:39:20.380 189497 DEBUG nova.compute.resource_tracker [None req-bd919016-4d35-4252-9704-133b2c72d336 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  9 05:39:20 np0005551604 nova_compute[189493]: 2025-12-09 10:39:20.380 189497 DEBUG oslo_concurrency.lockutils [None req-bd919016-4d35-4252-9704-133b2c72d336 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.171s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
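The inventory pushed to Placement above uses the standard capacity formula: for each resource class, allocatable capacity is (total - reserved) * allocation_ratio. A short worked check against the logged values:

    # Inventory as logged for provider cdc1168d-33c9-4d2c-8f23-1b695a68afd0
    inventory = {
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "DISK_GB":   {"total": 79,   "reserved": 0,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, capacity)
    # MEMORY_MB 7167.0, VCPU 32.0, DISK_GB 71.1 allocatable units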
Dec  9 05:39:20 np0005551604 nova_compute[189493]: 2025-12-09 10:39:20.381 189497 DEBUG nova.service [None req-bd919016-4d35-4252-9704-133b2c72d336 - - - - - -] Creating RPC server for service compute start /usr/lib/python3.9/site-packages/nova/service.py:182#033[00m
Dec  9 05:39:20 np0005551604 nova_compute[189493]: 2025-12-09 10:39:20.488 189497 DEBUG nova.service [None req-bd919016-4d35-4252-9704-133b2c72d336 - - - - - -] Join ServiceGroup membership for this service compute start /usr/lib/python3.9/site-packages/nova/service.py:199#033[00m
Dec  9 05:39:20 np0005551604 nova_compute[189493]: 2025-12-09 10:39:20.488 189497 DEBUG nova.servicegroup.drivers.db [None req-bd919016-4d35-4252-9704-133b2c72d336 - - - - - -] DB_Driver: join new ServiceGroup member compute-0.ctlplane.example.com to the compute group, service = <Service: host=compute-0.ctlplane.example.com, binary=nova-compute, manager_class_name=nova.compute.manager.ComputeManager> join /usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py:44#033[00m
Dec  9 05:39:22 np0005551604 systemd-logind[806]: New session 26 of user zuul.
Dec  9 05:39:22 np0005551604 systemd[1]: Started Session 26 of User zuul.
Dec  9 05:39:22 np0005551604 podman[189794]: 2025-12-09 10:39:22.39272665 +0000 UTC m=+0.064697879 container health_status 8f562587c42532f877bd4ac5090cf2d81dd9415b6201e22f74972e6d6b9e9403 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
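Periodic health_status entries like the one above come from podman running the container's configured healthcheck (`/openstack/healthcheck`, bind-mounted from /var/lib/openstack/healthchecks/<name>). The current status can be read back from `podman inspect`; a minimal sketch, assuming podman is on PATH and the caller can see these root-owned containers (the key is State.Health on recent podman, State.Healthcheck on older releases):

    import json
    import subprocess

    def health_status(container: str) -> str:
        out = subprocess.run(
            ["podman", "inspect", container],
            check=True, capture_output=True, text=True,
        ).stdout
        state = json.loads(out)[0]["State"]
        health = state.get("Health") or state.get("Healthcheck") or {}
        return health.get("Status", "unknown")

    print(health_status("ovn_metadata_agent"))  # "healthy", per the log above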
Dec  9 05:39:23 np0005551604 python3.9[189965]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  9 05:39:25 np0005551604 python3.9[190121]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec  9 05:39:25 np0005551604 systemd[1]: Reloading.
Dec  9 05:39:25 np0005551604 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  9 05:39:25 np0005551604 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
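This generator warning repeats on every daemon reload because the legacy 'network' initscript has no native unit; silencing it means shipping one. A hypothetical minimal unit (the file name and paths are illustrative, not taken from this host) would look like:

    # /etc/systemd/system/network.service (illustrative sketch only)
    [Unit]
    Description=Legacy network-scripts service
    After=network-pre.target
    Wants=network.target

    [Service]
    Type=oneshot
    RemainAfterExit=yes
    ExecStart=/etc/rc.d/init.d/network start
    ExecStop=/etc/rc.d/init.d/network stop

    [Install]
    WantedBy=multi-user.target

As the messages at 05:39:26 below note, the supported path is to migrate to NetworkManager rather than keep the deprecated network-scripts.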
Dec  9 05:39:26 np0005551604 python3.9[190306]: ansible-ansible.builtin.service_facts Invoked
Dec  9 05:39:26 np0005551604 network[190323]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Dec  9 05:39:26 np0005551604 network[190324]: 'network-scripts' will be removed from distribution in near future.
Dec  9 05:39:26 np0005551604 network[190325]: It is advised to switch to 'NetworkManager' instead for network management.
Dec  9 05:39:29 np0005551604 podman[190403]: 2025-12-09 10:39:29.659867575 +0000 UTC m=+0.099330542 container health_status e0a077177b2f078df1f170a6e5c0e8e08d4365b999ec0c487047ed6ab628f3d6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller)
Dec  9 05:39:31 np0005551604 python3.9[190625]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_ceilometer_agent_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  9 05:39:32 np0005551604 python3.9[190778]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_ceilometer_agent_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:39:32 np0005551604 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec  9 05:39:33 np0005551604 python3.9[190931]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_ceilometer_agent_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:39:34 np0005551604 python3.9[191083]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012  systemctl disable --now certmonger.service#012  test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
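In the entry above, `#012` is the journald escape for a newline, so the _raw_params script decodes to:

    if systemctl is-active certmonger.service; then
      systemctl disable --now certmonger.service
      test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
    fi

i.e. certmonger is stopped and disabled, and additionally masked unless a local unit override exists.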
Dec  9 05:39:34 np0005551604 python3.9[191235]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec  9 05:39:35 np0005551604 python3.9[191387]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec  9 05:39:35 np0005551604 systemd[1]: Reloading.
Dec  9 05:39:35 np0005551604 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  9 05:39:35 np0005551604 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  9 05:39:36 np0005551604 python3.9[191575]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_ceilometer_agent_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  9 05:39:37 np0005551604 python3.9[191728]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/openstack/config/telemetry recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  9 05:39:38 np0005551604 python3.9[191878]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  9 05:39:38 np0005551604 python3.9[192030]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  9 05:39:39 np0005551604 podman[192125]: 2025-12-09 10:39:39.601715049 +0000 UTC m=+0.084291434 container health_status 0391d8911d61abd7376f1f93f329cadfe8d3add845c9e6f46fc2c3dfbcc4f02a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, managed_by=edpm_ansible, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0)
Dec  9 05:39:39 np0005551604 python3.9[192162]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765276778.4503536-133-140535091597249/.source.conf follow=False _original_basename=ceilometer-host-specific.conf.j2 checksum=e86e0e43000ce9ccfe5aefbf8e8f2e3d15d05584 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  9 05:39:40 np0005551604 python3.9[192323]: ansible-ansible.builtin.group Invoked with name=libvirt state=present force=False system=False local=False non_unique=False gid=None gid_min=None gid_max=None
Dec  9 05:39:41 np0005551604 python3.9[192475]: ansible-ansible.builtin.getent Invoked with database=passwd key=ceilometer fail_key=True service=None split=None
Dec  9 05:39:42 np0005551604 python3.9[192628]: ansible-ansible.builtin.group Invoked with gid=42405 name=ceilometer state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Dec  9 05:39:43 np0005551604 python3.9[192786]: ansible-ansible.builtin.user Invoked with comment=ceilometer user group=ceilometer groups=['libvirt'] name=ceilometer shell=/sbin/nologin state=present uid=42405 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
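The getent/group/user triple above is a standard Ansible pattern for creating a fixed-UID service account; reconstructed from the logged arguments (a sketch of the task, not the actual role source):

    - name: Create ceilometer system user with access to libvirt
      ansible.builtin.user:
        name: ceilometer
        uid: 42405
        group: ceilometer
        groups:
          - libvirt
        shell: /sbin/nologin
        comment: ceilometer user
        state: present

Putting ceilometer in the libvirt group is what lets the compute polling agent talk to the local libvirt socket.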
Dec  9 05:39:44 np0005551604 python3.9[192944]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  9 05:39:45 np0005551604 python3.9[193065]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/ceilometer.conf mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1765276784.0966673-201-230735722321652/.source.conf _original_basename=ceilometer.conf follow=False checksum=f74f01c63e6cdeca5458ef9aff2a1db5d6a4e4b9 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:39:45 np0005551604 python3.9[193215]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/polling.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  9 05:39:46 np0005551604 python3.9[193336]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/polling.yaml mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1765276785.2517629-201-138191552085246/.source.yaml _original_basename=polling.yaml follow=False checksum=6c8680a286285f2e0ef9fa528ca754765e5ed0e5 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:39:46 np0005551604 python3.9[193486]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/custom.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  9 05:39:47 np0005551604 python3.9[193607]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/custom.conf mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1765276786.483762-201-41019246652058/.source.conf _original_basename=custom.conf follow=False checksum=838b8b0a7d7f72e55ab67d39f32e3cb3eca2139b backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:39:48 np0005551604 python3.9[193757]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/tls.crt follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  9 05:39:48 np0005551604 nova_compute[189493]: 2025-12-09 10:39:48.491 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  9 05:39:48 np0005551604 nova_compute[189493]: 2025-12-09 10:39:48.531 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  9 05:39:49 np0005551604 python3.9[193909]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/tls.key follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  9 05:39:49 np0005551604 python3.9[194061]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  9 05:39:50 np0005551604 python3.9[194182]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1765276789.3812351-260-76771144190729/.source.json follow=False _original_basename=ceilometer-agent-compute.json.j2 checksum=264d11e8d3809e7ef745878dce7edd46098e25b2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
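`mode=420` in these copy tasks is not a typo: a mode written unquoted in YAML is parsed as the decimal integer 420, which is the same bit pattern as octal 0644 (rw-r--r--). A one-line check:

    print(oct(420))  # '0o644', i.e. equivalent to mode: '0644'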
Dec  9 05:39:51 np0005551604 python3.9[194332]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  9 05:39:51 np0005551604 python3.9[194408]: ansible-ansible.legacy.file Invoked with mode=420 dest=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf _original_basename=ceilometer-host-specific.conf.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry/ceilometer-host-specific.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:39:52 np0005551604 python3.9[194558]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer_agent_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  9 05:39:52 np0005551604 podman[194653]: 2025-12-09 10:39:52.763940296 +0000 UTC m=+0.091099215 container health_status 8f562587c42532f877bd4ac5090cf2d81dd9415b6201e22f74972e6d6b9e9403 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec  9 05:39:52 np0005551604 python3.9[194686]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/ceilometer_agent_compute.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1765276791.6609843-260-127616818774162/.source.json follow=False _original_basename=ceilometer_agent_compute.json.j2 checksum=4096a0f5410f47dcaf8ab19e56a9d8e211effecd backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:39:53 np0005551604 python3.9[194849]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  9 05:39:53 np0005551604 python3.9[194970]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1765276793.0252616-260-59262484753373/.source.yaml follow=False _original_basename=ceilometer_prom_exporter.yaml.j2 checksum=10157c879411ee6023e506dc85a343cedc52700f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:39:54 np0005551604 python3.9[195120]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/firewall.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  9 05:39:55 np0005551604 python3.9[195241]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/firewall.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1765276794.1054826-260-19412778976771/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=d942d984493b214bda2913f753ff68cdcedff00e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:39:55 np0005551604 python3.9[195391]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/node_exporter.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  9 05:39:56 np0005551604 python3.9[195512]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/node_exporter.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1765276795.271495-260-192684799483741/.source.json follow=False _original_basename=node_exporter.json.j2 checksum=6e4982940d2bfae88404914dfaf72552f6356d81 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:39:57 np0005551604 python3.9[195662]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/node_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  9 05:39:57 np0005551604 python3.9[195783]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/node_exporter.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1765276796.6836395-260-217485946628743/.source.yaml follow=False _original_basename=node_exporter.yaml.j2 checksum=81d906d3e1e8c4f8367276f5d3a67b80ca7e989e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:39:58 np0005551604 python3.9[195933]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/openstack_network_exporter.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  9 05:39:58 np0005551604 python3.9[196054]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/openstack_network_exporter.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1765276797.8660398-260-136152620593911/.source.json follow=False _original_basename=openstack_network_exporter.json.j2 checksum=d474f1e4c3dbd24762592c51cbe5311f0a037273 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:39:59 np0005551604 python3.9[196204]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  9 05:39:59 np0005551604 podman[196299]: 2025-12-09 10:39:59.867729915 +0000 UTC m=+0.102644105 container health_status e0a077177b2f078df1f170a6e5c0e8e08d4365b999ec0c487047ed6ab628f3d6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller)
Dec  9 05:39:59 np0005551604 python3.9[196335]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1765276799.016886-260-275075943306042/.source.yaml follow=False _original_basename=openstack_network_exporter.yaml.j2 checksum=2b6bd0891e609bf38a73282f42888052b750bed6 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:40:00 np0005551604 python3.9[196502]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/podman_exporter.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  9 05:40:01 np0005551604 python3.9[196623]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/podman_exporter.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1765276800.1452332-260-54191626509074/.source.json follow=False _original_basename=podman_exporter.json.j2 checksum=e342121a88f67e2bae7ebc05d1e6d350470198a5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:40:01 np0005551604 python3.9[196773]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/podman_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  9 05:40:02 np0005551604 python3.9[196894]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry/podman_exporter.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1765276801.3243465-260-70705576530291/.source.yaml follow=False _original_basename=podman_exporter.yaml.j2 checksum=7ccb5eca2ff1dc337c3f3ecbbff5245af7149c47 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:40:03 np0005551604 python3.9[197044]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/node_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  9 05:40:03 np0005551604 python3.9[197120]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/openstack/config/telemetry/node_exporter.yaml _original_basename=node_exporter.yaml.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry/node_exporter.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:40:04 np0005551604 python3.9[197270]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/podman_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  9 05:40:04 np0005551604 python3.9[197346]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/openstack/config/telemetry/podman_exporter.yaml _original_basename=podman_exporter.yaml.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry/podman_exporter.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:40:05 np0005551604 python3.9[197496]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  9 05:40:05 np0005551604 python3.9[197572]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml _original_basename=ceilometer_prom_exporter.yaml.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:40:06 np0005551604 python3.9[197726]: ansible-ansible.builtin.file Invoked with group=ceilometer mode=0644 owner=ceilometer path=/var/lib/openstack/certs/telemetry/default/tls.crt recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:40:07 np0005551604 python3.9[197878]: ansible-ansible.builtin.file Invoked with group=ceilometer mode=0644 owner=ceilometer path=/var/lib/openstack/certs/telemetry/default/tls.key recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:40:07 np0005551604 python3.9[198030]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
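
The directory task above sets setype=container_file_t so the healthcheck scripts can be bind-mounted into containers. A hedged shell sketch of the same outcome (chcon is non-persistent; a semanage fcontext rule plus restorecon would survive relabels):

    install -d -m 0755 -o zuul -g zuul /var/lib/openstack/healthchecks
    chcon -t container_file_t /var/lib/openstack/healthchecks
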
Dec  9 05:40:08 np0005551604 python3.9[198182]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=podman.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  9 05:40:08 np0005551604 systemd[1]: Reloading.
Dec  9 05:40:08 np0005551604 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  9 05:40:08 np0005551604 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  9 05:40:09 np0005551604 systemd[1]: Listening on Podman API Socket.
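
With podman.socket active, the Podman API is reachable over the default root socket. A quick verification sketch, assuming the stock socket path:

    systemctl is-active podman.socket
    # The libpod ping endpoint returns "OK" when the API answers.
    curl -s --unix-socket /run/podman/podman.sock http://d/v4.0.0/libpod/_ping
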
Dec  9 05:40:09 np0005551604 podman[198344]: 2025-12-09 10:40:09.876583096 +0000 UTC m=+0.070963787 container health_status 0391d8911d61abd7376f1f93f329cadfe8d3add845c9e6f46fc2c3dfbcc4f02a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team)
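
The multipathd event above is a periodic health check reporting health_status=healthy. The same state can be probed by hand; note the inspect field is .State.Health on current podman and .State.Healthcheck on older releases:

    podman healthcheck run multipathd; echo "exit=$?"   # 0 means healthy
    podman inspect multipathd --format '{{json .State.Health}}'
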
Dec  9 05:40:10 np0005551604 python3.9[198386]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ceilometer_agent_compute/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  9 05:40:10 np0005551604 python3.9[198512]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ceilometer_agent_compute/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765276809.531041-482-177504918846861/.source _original_basename=healthcheck follow=False checksum=ebb343c21fce35a02591a9351660cb7035a47d42 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec  9 05:40:11 np0005551604 python3.9[198588]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ceilometer_agent_compute/healthcheck.future follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  9 05:40:11 np0005551604 python3.9[198711]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ceilometer_agent_compute/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765276809.531041-482-177504918846861/.source.future _original_basename=healthcheck.future follow=False checksum=d500a98192f4ddd70b4dfdc059e2d81aed36a294 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec  9 05:40:12 np0005551604 python3.9[198863]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/telemetry config_pattern=ceilometer_agent_compute.json debug=False
Dec  9 05:40:13 np0005551604 python3.9[199015]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec  9 05:40:14 np0005551604 python3[199167]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/telemetry config_id=edpm config_overrides={} config_patterns=ceilometer_agent_compute.json log_base_path=/var/log/containers/stdouts debug=False
Dec  9 05:40:15 np0005551604 podman[199202]: 2025-12-09 10:40:15.006078851 +0000 UTC m=+0.076065706 container create b432835229990b9e7cd237d75f8273b15e565fca524d4ea9a7c1f1bf3c773614 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=edpm, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, io.buildah.version=1.41.4, org.label-schema.build-date=20251125, tcib_managed=true, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 10 Base Image)
Dec  9 05:40:15 np0005551604 podman[199202]: 2025-12-09 10:40:14.957153039 +0000 UTC m=+0.027139924 image pull b1b6d71b432c07886b3bae74df4dc9841d1f26407d5f96d6c1e400b0154d9a3d quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested
Dec  9 05:40:15 np0005551604 python3[199167]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ceilometer_agent_compute --conmon-pidfile /run/ceilometer_agent_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env OS_ENDPOINT_TYPE=internal --healthcheck-command /openstack/healthcheck compute --label config_id=edpm --label container_name=ceilometer_agent_compute --label managed_by=edpm_ansible --label config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']} --log-driver journald --log-level info --network host --security-opt label:type:ceilometer_polling_t --user ceilometer --volume /var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z --volume /var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z --volume /run/libvirt:/run/libvirt:shared,ro --volume /etc/hosts:/etc/hosts:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z --volume /var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z --volume /dev/log:/dev/log --volume /var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested kolla_start
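
For readability, the single-line podman create call logged above corresponds to the following multi-line form (abridged only in that the long --label config_data=... argument is omitted):

    podman create --name ceilometer_agent_compute \
        --conmon-pidfile /run/ceilometer_agent_compute.pid \
        --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env OS_ENDPOINT_TYPE=internal \
        --healthcheck-command '/openstack/healthcheck compute' \
        --label config_id=edpm --label container_name=ceilometer_agent_compute \
        --label managed_by=edpm_ansible \
        --log-driver journald --log-level info --network host \
        --security-opt label:type:ceilometer_polling_t --user ceilometer \
        --volume /var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z \
        --volume /var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z \
        --volume /run/libvirt:/run/libvirt:shared,ro \
        --volume /etc/hosts:/etc/hosts:ro \
        --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro \
        --volume /etc/localtime:/etc/localtime:ro \
        --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro \
        --volume /var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z \
        --volume /var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z \
        --volume /var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z \
        --volume /dev/log:/dev/log \
        --volume /var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z \
        quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested kolla_start
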
Dec  9 05:40:15 np0005551604 python3.9[199391]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  9 05:40:16 np0005551604 python3.9[199545]: ansible-file Invoked with path=/etc/systemd/system/edpm_ceilometer_agent_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:40:16 np0005551604 nova_compute[189493]: 2025-12-09 10:40:16.843 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  9 05:40:16 np0005551604 nova_compute[189493]: 2025-12-09 10:40:16.845 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  9 05:40:16 np0005551604 nova_compute[189493]: 2025-12-09 10:40:16.845 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec  9 05:40:16 np0005551604 nova_compute[189493]: 2025-12-09 10:40:16.845 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec  9 05:40:16 np0005551604 nova_compute[189493]: 2025-12-09 10:40:16.869 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec  9 05:40:16 np0005551604 nova_compute[189493]: 2025-12-09 10:40:16.869 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  9 05:40:16 np0005551604 nova_compute[189493]: 2025-12-09 10:40:16.869 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  9 05:40:16 np0005551604 nova_compute[189493]: 2025-12-09 10:40:16.869 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  9 05:40:16 np0005551604 nova_compute[189493]: 2025-12-09 10:40:16.870 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  9 05:40:16 np0005551604 nova_compute[189493]: 2025-12-09 10:40:16.870 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  9 05:40:16 np0005551604 nova_compute[189493]: 2025-12-09 10:40:16.870 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  9 05:40:16 np0005551604 nova_compute[189493]: 2025-12-09 10:40:16.870 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec  9 05:40:16 np0005551604 nova_compute[189493]: 2025-12-09 10:40:16.870 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  9 05:40:16 np0005551604 nova_compute[189493]: 2025-12-09 10:40:16.904 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  9 05:40:16 np0005551604 nova_compute[189493]: 2025-12-09 10:40:16.904 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  9 05:40:16 np0005551604 nova_compute[189493]: 2025-12-09 10:40:16.905 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  9 05:40:16 np0005551604 nova_compute[189493]: 2025-12-09 10:40:16.905 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec  9 05:40:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:40:16.967 106644 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  9 05:40:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:40:16.967 106644 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  9 05:40:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:40:16.967 106644 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  9 05:40:17 np0005551604 nova_compute[189493]: 2025-12-09 10:40:17.072 189497 WARNING nova.virt.libvirt.driver [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec  9 05:40:17 np0005551604 nova_compute[189493]: 2025-12-09 10:40:17.073 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5980MB free_disk=72.40877532958984GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec  9 05:40:17 np0005551604 nova_compute[189493]: 2025-12-09 10:40:17.073 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  9 05:40:17 np0005551604 nova_compute[189493]: 2025-12-09 10:40:17.074 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  9 05:40:17 np0005551604 nova_compute[189493]: 2025-12-09 10:40:17.171 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec  9 05:40:17 np0005551604 nova_compute[189493]: 2025-12-09 10:40:17.171 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec  9 05:40:17 np0005551604 nova_compute[189493]: 2025-12-09 10:40:17.202 189497 DEBUG nova.compute.provider_tree [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Inventory has not changed in ProviderTree for provider: cdc1168d-33c9-4d2c-8f23-1b695a68afd0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec  9 05:40:17 np0005551604 nova_compute[189493]: 2025-12-09 10:40:17.219 189497 DEBUG nova.scheduler.client.report [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Inventory has not changed for provider cdc1168d-33c9-4d2c-8f23-1b695a68afd0 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec  9 05:40:17 np0005551604 nova_compute[189493]: 2025-12-09 10:40:17.221 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec  9 05:40:17 np0005551604 nova_compute[189493]: 2025-12-09 10:40:17.221 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.148s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
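
The audit above leaves provider cdc1168d-33c9-4d2c-8f23-1b695a68afd0 unchanged in placement. From the control plane, the same inventory could be cross-checked with the osc-placement plugin (an assumption: admin credentials and python-openstackclient with osc-placement installed):

    openstack resource provider inventory list cdc1168d-33c9-4d2c-8f23-1b695a68afd0
    openstack resource provider usage show cdc1168d-33c9-4d2c-8f23-1b695a68afd0
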
Dec  9 05:40:17 np0005551604 python3.9[199696]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765276816.7640862-546-120083506692431/source dest=/etc/systemd/system/edpm_ceilometer_agent_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:40:18 np0005551604 python3.9[199772]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec  9 05:40:18 np0005551604 systemd[1]: Reloading.
Dec  9 05:40:18 np0005551604 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  9 05:40:18 np0005551604 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  9 05:40:19 np0005551604 python3.9[199883]: ansible-systemd Invoked with state=restarted name=edpm_ceilometer_agent_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  9 05:40:19 np0005551604 systemd[1]: Reloading.
Dec  9 05:40:19 np0005551604 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  9 05:40:19 np0005551604 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
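
The copy/systemd tasks above install the unit, reload systemd, then enable and restart the service. A manual shell equivalent, assuming the rendered unit file is at hand:

    install -m 0644 -o root -g root edpm_ceilometer_agent_compute.service /etc/systemd/system/
    systemctl daemon-reload
    systemctl enable edpm_ceilometer_agent_compute.service
    systemctl restart edpm_ceilometer_agent_compute.service
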
Dec  9 05:40:19 np0005551604 systemd[1]: Starting ceilometer_agent_compute container...
Dec  9 05:40:20 np0005551604 systemd[1]: Started libcrun container.
Dec  9 05:40:20 np0005551604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0e767301f9af05630a0fcc687f8cd4c41f9787f32be7be18488c9538f8a8b05/merged/etc/ceilometer/ceilometer_prom_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Dec  9 05:40:20 np0005551604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0e767301f9af05630a0fcc687f8cd4c41f9787f32be7be18488c9538f8a8b05/merged/etc/ceilometer/tls supports timestamps until 2038 (0x7fffffff)
Dec  9 05:40:20 np0005551604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0e767301f9af05630a0fcc687f8cd4c41f9787f32be7be18488c9538f8a8b05/merged/var/lib/openstack/config supports timestamps until 2038 (0x7fffffff)
Dec  9 05:40:20 np0005551604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0e767301f9af05630a0fcc687f8cd4c41f9787f32be7be18488c9538f8a8b05/merged/var/lib/kolla/config_files/config.json supports timestamps until 2038 (0x7fffffff)
Dec  9 05:40:20 np0005551604 systemd[1]: Started /usr/bin/podman healthcheck run b432835229990b9e7cd237d75f8273b15e565fca524d4ea9a7c1f1bf3c773614.
Dec  9 05:40:20 np0005551604 podman[199923]: 2025-12-09 10:40:20.090749067 +0000 UTC m=+0.174331971 container init b432835229990b9e7cd237d75f8273b15e565fca524d4ea9a7c1f1bf3c773614 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, io.buildah.version=1.41.4, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec  9 05:40:20 np0005551604 ceilometer_agent_compute[199938]: + sudo -E kolla_set_configs
Dec  9 05:40:20 np0005551604 ceilometer_agent_compute[199938]: sudo: unable to send audit message: Operation not permitted
Dec  9 05:40:20 np0005551604 podman[199923]: 2025-12-09 10:40:20.126407915 +0000 UTC m=+0.209990699 container start b432835229990b9e7cd237d75f8273b15e565fca524d4ea9a7c1f1bf3c773614 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, tcib_managed=true, config_id=edpm, managed_by=edpm_ansible, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec  9 05:40:20 np0005551604 podman[199923]: ceilometer_agent_compute
Dec  9 05:40:20 np0005551604 systemd[1]: Started ceilometer_agent_compute container.
Dec  9 05:40:20 np0005551604 ceilometer_agent_compute[199938]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec  9 05:40:20 np0005551604 ceilometer_agent_compute[199938]: INFO:__main__:Validating config file
Dec  9 05:40:20 np0005551604 ceilometer_agent_compute[199938]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec  9 05:40:20 np0005551604 ceilometer_agent_compute[199938]: INFO:__main__:Copying service configuration files
Dec  9 05:40:20 np0005551604 ceilometer_agent_compute[199938]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf
Dec  9 05:40:20 np0005551604 ceilometer_agent_compute[199938]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer.conf to /etc/ceilometer/ceilometer.conf
Dec  9 05:40:20 np0005551604 ceilometer_agent_compute[199938]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf
Dec  9 05:40:20 np0005551604 ceilometer_agent_compute[199938]: INFO:__main__:Deleting /etc/ceilometer/polling.yaml
Dec  9 05:40:20 np0005551604 ceilometer_agent_compute[199938]: INFO:__main__:Copying /var/lib/openstack/config/polling.yaml to /etc/ceilometer/polling.yaml
Dec  9 05:40:20 np0005551604 ceilometer_agent_compute[199938]: INFO:__main__:Setting permission for /etc/ceilometer/polling.yaml
Dec  9 05:40:20 np0005551604 ceilometer_agent_compute[199938]: INFO:__main__:Copying /var/lib/openstack/config/custom.conf to /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Dec  9 05:40:20 np0005551604 ceilometer_agent_compute[199938]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Dec  9 05:40:20 np0005551604 ceilometer_agent_compute[199938]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer-host-specific.conf to /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Dec  9 05:40:20 np0005551604 ceilometer_agent_compute[199938]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Dec  9 05:40:20 np0005551604 ceilometer_agent_compute[199938]: INFO:__main__:Writing out command to execute
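
The copy operations above are driven by the config.json that kolla_set_configs loaded. Reconstructed from those log lines, the file plausibly looks like the sketch below (the owner/perm values are assumptions; the command string matches the /run_command contents shown further down):

    cat <<'EOF' > /var/lib/openstack/config/telemetry/ceilometer-agent-compute.json
    {
      "command": "/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout",
      "config_files": [
        {"source": "/var/lib/openstack/config/ceilometer.conf",
         "dest": "/etc/ceilometer/ceilometer.conf", "owner": "ceilometer", "perm": "0600"},
        {"source": "/var/lib/openstack/config/polling.yaml",
         "dest": "/etc/ceilometer/polling.yaml", "owner": "ceilometer", "perm": "0600"},
        {"source": "/var/lib/openstack/config/custom.conf",
         "dest": "/etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf", "owner": "ceilometer", "perm": "0600"},
        {"source": "/var/lib/openstack/config/ceilometer-host-specific.conf",
         "dest": "/etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf", "owner": "ceilometer", "perm": "0600"}
      ]
    }
    EOF
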
Dec  9 05:40:20 np0005551604 podman[199945]: 2025-12-09 10:40:20.184056957 +0000 UTC m=+0.046329478 container health_status b432835229990b9e7cd237d75f8273b15e565fca524d4ea9a7c1f1bf3c773614 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=starting, health_failing_streak=1, health_log=, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Dec  9 05:40:20 np0005551604 systemd[1]: b432835229990b9e7cd237d75f8273b15e565fca524d4ea9a7c1f1bf3c773614-7bee615facdddd8.service: Main process exited, code=exited, status=1/FAILURE
Dec  9 05:40:20 np0005551604 systemd[1]: b432835229990b9e7cd237d75f8273b15e565fca524d4ea9a7c1f1bf3c773614-7bee615facdddd8.service: Failed with result 'exit-code'.
Dec  9 05:40:20 np0005551604 ceilometer_agent_compute[199938]: ++ cat /run_command
Dec  9 05:40:20 np0005551604 ceilometer_agent_compute[199938]: + CMD='/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'
Dec  9 05:40:20 np0005551604 ceilometer_agent_compute[199938]: + ARGS=
Dec  9 05:40:20 np0005551604 ceilometer_agent_compute[199938]: + sudo kolla_copy_cacerts
Dec  9 05:40:20 np0005551604 ceilometer_agent_compute[199938]: sudo: unable to send audit message: Operation not permitted
Dec  9 05:40:20 np0005551604 ceilometer_agent_compute[199938]: + [[ ! -n '' ]]
Dec  9 05:40:20 np0005551604 ceilometer_agent_compute[199938]: + . kolla_extend_start
Dec  9 05:40:20 np0005551604 ceilometer_agent_compute[199938]: + echo 'Running command: '\''/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'\'''
Dec  9 05:40:20 np0005551604 ceilometer_agent_compute[199938]: Running command: '/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'
Dec  9 05:40:20 np0005551604 ceilometer_agent_compute[199938]: + umask 0022
Dec  9 05:40:20 np0005551604 ceilometer_agent_compute[199938]: + exec /usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout
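
Once the agent is running, the command that kolla_start assembled can be confirmed in place:

    podman exec ceilometer_agent_compute cat /run_command
    podman logs --since 5m ceilometer_agent_compute | tail -n 20
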
Dec  9 05:40:20 np0005551604 python3.9[200122]: ansible-ansible.builtin.systemd Invoked with name=edpm_ceilometer_agent_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  9 05:40:21 np0005551604 systemd[1]: Stopping ceilometer_agent_compute container...
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.122 2 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_manager_options /usr/lib/python3.12/site-packages/cotyledon/oslo_config_glue.py:45
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.122 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2804
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.123 2 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2805
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.123 2 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2806
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.123 2 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2807
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.123 2 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2809
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.123 2 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.123 2 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.123 2 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.123 2 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.123 2 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.123 2 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.124 2 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.124 2 DEBUG cotyledon.oslo_config_glue [-] enable_notifications           = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.124 2 DEBUG cotyledon.oslo_config_glue [-] enable_prometheus_exporter     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.124 2 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.124 2 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.124 2 DEBUG cotyledon.oslo_config_glue [-] heartbeat_socket_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.124 2 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.124 2 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.124 2 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.124 2 WARNING oslo_config.cfg [-] Deprecated: Option "tenant_name_discovery" from group "DEFAULT" is deprecated. Use option "identity_name_discovery" from group "DEFAULT".
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.125 2 DEBUG cotyledon.oslo_config_glue [-] identity_name_discovery        = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.125 2 DEBUG cotyledon.oslo_config_glue [-] ignore_disabled_projects       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.125 2 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.125 2 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.125 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.125 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.125 2 DEBUG cotyledon.oslo_config_glue [-] log_color                      = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.125 2 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.125 2 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.125 2 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.125 2 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.125 2 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.126 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.126 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.126 2 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.126 2 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.126 2 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.126 2 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.126 2 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.126 2 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.126 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.126 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.126 2 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.126 2 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.126 2 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.126 2 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['compute'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.127 2 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.127 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_listen_addresses    = ['127.0.0.1:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.127 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_certfile        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.127 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_enable          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.127 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_keyfile         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.127 2 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.127 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.127 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.127 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.127 2 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.127 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.127 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.127 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.127 2 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.128 2 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.128 2 DEBUG cotyledon.oslo_config_glue [-] shell_completion               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.128 2 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.128 2 DEBUG cotyledon.oslo_config_glue [-] threads_to_process_pollsters   = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.128 2 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.128 2 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.128 2 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.128 2 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.128 2 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.128 2 DEBUG cotyledon.oslo_config_glue [-] compute.fetch_extra_metadata   = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.128 2 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.128 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.128 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.129 2 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.129 2 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.129 2 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.129 2 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.129 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.129 2 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.12/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.129 2 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.129 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.129 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.129 2 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.129 2 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.129 2 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.130 2 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.130 2 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.130 2 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.130 2 DEBUG cotyledon.oslo_config_glue [-] polling.enable_notifications   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.130 2 DEBUG cotyledon.oslo_config_glue [-] polling.enable_prometheus_exporter = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.130 2 DEBUG cotyledon.oslo_config_glue [-] polling.heartbeat_socket_dir   = /var/lib/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.130 2 DEBUG cotyledon.oslo_config_glue [-] polling.identity_name_discovery = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.130 2 DEBUG cotyledon.oslo_config_glue [-] polling.ignore_disabled_projects = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.130 2 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.130 2 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.130 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_listen_addresses = ['[::]:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.130 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_certfile = /etc/ceilometer/tls/tls.crt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.130 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_enable  = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.130 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_keyfile = /etc/ceilometer/tls/tls.key log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.131 2 DEBUG cotyledon.oslo_config_glue [-] polling.threads_to_process_pollsters = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.131 2 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.131 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.131 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.131 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.131 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.131 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.131 2 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.131 2 DEBUG cotyledon.oslo_config_glue [-] service_types.aodh             = alarming log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.131 2 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.131 2 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.132 2 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.132 2 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.132 2 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.132 2 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.132 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.132 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.132 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.132 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.132 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.132 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.132 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.132 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.132 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.132 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.133 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.133 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.133 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.133 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.133 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.133 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.133 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.133 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.133 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.133 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.133 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.133 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.133 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.134 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.134 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.134 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.134 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.134 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.134 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.134 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.134 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.134 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.134 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.134 2 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.134 2 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.134 2 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.135 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2828
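
The dump above is the master process (PID 2 in the message body) logging its full effective configuration via oslo.config's log_opt_values(); the same dump repeats below for the heartbeat child (PID 12). A minimal sketch, pure standard library, for pulling the "option = value" pairs out of a saved journal excerpt and diffing two processes' dumps (the PIDs 2 and 12 match this log; any other values are placeholders):

    import re
    import sys

    # Matches oslo.config option lines as they appear in this journal, e.g.
    # "... 2 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size   = 50 log_opt_values ..."
    OPT_RE = re.compile(
        r"DEBUG cotyledon\.oslo_config_glue \[-\] "
        r"(?P<key>[A-Za-z0-9_.]+)\s+= (?P<value>.*?) log_opt_values "
    )

    def dump_for_pid(path, pid):
        """Collect {option: value} for one worker PID from a saved excerpt."""
        opts = {}
        tag = " %d DEBUG " % pid
        with open(path) as fh:
            for line in fh:
                if tag in line:
                    m = OPT_RE.search(line)
                    if m:
                        opts[m.group("key")] = m.group("value")
        return opts

    if __name__ == "__main__":
        master = dump_for_pid(sys.argv[1], 2)   # master process in this log
        child = dump_for_pid(sys.argv[1], 12)   # heartbeat child
        for key in sorted(set(master) | set(child)):
            if master.get(key) != child.get(key):
                print(key, "master:", master.get(key), "child:", child.get(key))
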
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.155 12 INFO ceilometer.polling.manager [-] Starting heartbeat child service. Listening on /var/lib/ceilometer/ceilometer-compute.socket
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.156 2 INFO cotyledon._service_manager [-] Caught SIGTERM signal, graceful exiting of master process
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.156 12 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_options /usr/lib/python3.12/site-packages/cotyledon/oslo_config_glue.py:53
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.156 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2804
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.156 12 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2805
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.157 12 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2806
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.157 12 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2807
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.157 12 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2809
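
The "command line args" / "config files" lines above are oslo.config's standard precedence report: CLI arguments override config-file values, which override option defaults. A self-contained sketch of that layering, using one illustrative option rather than ceilometer's real option set; log_opt_values() is the call that prints the banner, the source list, and the "key = value" lines seen throughout this journal:

    import logging
    import pathlib
    import tempfile

    from oslo_config import cfg

    # One illustrative option; ceilometer registers its real set at startup.
    conf = cfg.ConfigOpts()
    conf.register_cli_opts([cfg.BoolOpt("debug", default=False)])

    # A throwaway file standing in for /etc/ceilometer/ceilometer.conf.
    cfg_file = pathlib.Path(tempfile.mkdtemp()) / "ceilometer.conf"
    cfg_file.write_text("[DEFAULT]\ndebug = False\n")

    # CLI beats file beats default, the order reported above.
    conf(["--debug"], default_config_files=[str(cfg_file)])
    print(conf.debug)  # True: the CLI flag wins over the file value

    # log_opt_values() emits the ****/==== banners and the option lines.
    logging.basicConfig(level=logging.DEBUG)
    conf.log_opt_values(logging.getLogger("demo"), logging.DEBUG)
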
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.157 12 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.157 12 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.157 12 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.157 12 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.157 12 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.157 12 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.157 12 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.158 12 DEBUG cotyledon.oslo_config_glue [-] enable_notifications           = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.158 12 DEBUG cotyledon.oslo_config_glue [-] enable_prometheus_exporter     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.158 12 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.158 12 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.158 12 DEBUG cotyledon.oslo_config_glue [-] heartbeat_socket_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.158 12 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.158 12 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.158 12 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.158 12 DEBUG cotyledon.oslo_config_glue [-] identity_name_discovery        = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.158 12 DEBUG cotyledon.oslo_config_glue [-] ignore_disabled_projects       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.158 12 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.158 12 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.158 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.158 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.158 12 DEBUG cotyledon.oslo_config_glue [-] log_color                      = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.159 12 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.159 12 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.159 12 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.159 12 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.159 12 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.159 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.159 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.159 12 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.159 12 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.159 12 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.159 12 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.159 12 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.159 12 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.159 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.159 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.160 12 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.160 12 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.160 12 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.160 12 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['compute'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.160 12 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.160 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_listen_addresses    = ['127.0.0.1:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.160 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_certfile        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.160 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_enable          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.160 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_keyfile         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.160 12 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.160 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.160 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.160 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.160 12 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.160 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.160 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.161 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.161 12 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.161 12 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.161 12 DEBUG cotyledon.oslo_config_glue [-] shell_completion               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.161 12 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.161 12 DEBUG cotyledon.oslo_config_glue [-] threads_to_process_pollsters   = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.161 12 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.161 12 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.161 12 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.161 12 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.161 12 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.161 12 DEBUG cotyledon.oslo_config_glue [-] compute.fetch_extra_metadata   = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.162 12 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.162 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.162 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.162 12 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.162 12 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.162 12 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.162 12 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.162 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.162 12 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.12/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.162 12 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.162 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.163 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.163 12 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.163 12 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.163 12 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.163 12 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.163 12 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.163 12 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.163 12 DEBUG cotyledon.oslo_config_glue [-] polling.enable_notifications   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.163 12 DEBUG cotyledon.oslo_config_glue [-] polling.enable_prometheus_exporter = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.163 12 DEBUG cotyledon.oslo_config_glue [-] polling.heartbeat_socket_dir   = /var/lib/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.163 12 DEBUG cotyledon.oslo_config_glue [-] polling.identity_name_discovery = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.163 12 DEBUG cotyledon.oslo_config_glue [-] polling.ignore_disabled_projects = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.164 12 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.164 12 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.164 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_listen_addresses = ['[::]:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.164 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_certfile = /etc/ceilometer/tls/tls.crt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.164 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_enable  = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.164 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_keyfile = /etc/ceilometer/tls/tls.key log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.164 12 DEBUG cotyledon.oslo_config_glue [-] polling.threads_to_process_pollsters = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.164 12 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.164 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.164 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.164 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.164 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.164 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.164 12 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.164 12 DEBUG cotyledon.oslo_config_glue [-] service_types.aodh             = alarming log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.164 12 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.164 12 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.165 12 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.165 12 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.165 12 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.165 12 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.165 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.165 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.165 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.165 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.165 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.165 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.165 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.165 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.165 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.165 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.165 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.166 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.166 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.166 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.166 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.166 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.166 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.166 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.166 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.166 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.166 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.166 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.166 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.166 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.166 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.167 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.167 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.167 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.167 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.167 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.167 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.167 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.167 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.167 12 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.167 12 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.167 12 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.167 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2828
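The dump that ends at the asterisk line above is produced by oslo.config: cotyledon's oslo_config_glue asks the freshly forked service to log every registered option through ConfigOpts.log_opt_values(), the cfg.py call site named at the end of each line, and options registered with secret=True (passwords, messaging URLs) are masked as ****. A minimal sketch of that mechanism, with placeholder option names rather than ceilometer's real set:

```python
# Illustrative sketch only, not part of this log: how oslo.config emits the
# per-option DEBUG lines above. log_opt_values() is the oslo_config API the
# log cites (cfg.py:2817/2824); the two options below are placeholders.
import logging
from oslo_config import cfg

logging.basicConfig(level=logging.DEBUG)
LOG = logging.getLogger(__name__)

CONF = cfg.ConfigOpts()
CONF.register_opts([
    cfg.IntOpt('batch_size', default=50),
    cfg.StrOpt('telemetry_secret', secret=True),  # logged as '****'
])
CONF(args=[])                            # parse an empty command line
CONF.log_opt_values(LOG, logging.DEBUG)  # one DEBUG line per option
```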
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.167 12 DEBUG cotyledon._service [-] Run service AgentHeartBeatManager(0) [12] wait_forever /usr/lib/python3.12/site-packages/cotyledon/_service.py:263
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.169 12 DEBUG ceilometer.polling.manager [-] Started heartbeat child process. run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:519
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.171 12 DEBUG ceilometer.polling.manager [-] Started heartbeat update thread _read_queue /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:522
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.171 12 DEBUG ceilometer.polling.manager [-] Started heartbeat reporting thread _report_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:527
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.263 2 DEBUG cotyledon._service_manager [-] Killing services with signal SIGTERM _shutdown /usr/lib/python3.12/site-packages/cotyledon/_service_manager.py:319
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.265 2 DEBUG cotyledon._service_manager [-] Waiting services to terminate _shutdown /usr/lib/python3.12/site-packages/cotyledon/_service_manager.py:323
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.265 12 INFO cotyledon._service [-] Caught SIGTERM signal, graceful exiting of service AgentHeartBeatManager(0) [12]
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.382 14 DEBUG ceilometer.compute.virt.libvirt.utils [-] Connecting to libvirt: qemu:///system new_libvirt_connection /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/utils.py:96
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.391 14 INFO ceilometer.polling.manager [-] Looking for dynamic pollsters configurations at [['/etc/ceilometer/pollsters.d']].
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.392 14 INFO ceilometer.polling.manager [-] No dynamic pollsters found in folder [/etc/ceilometer/pollsters.d].
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.392 14 INFO ceilometer.polling.manager [-] No dynamic pollsters file found in dirs [['/etc/ceilometer/pollsters.d']].
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.526 14 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_options /usr/lib/python3.12/site-packages/cotyledon/oslo_config_glue.py:53
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.527 14 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2804
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.527 14 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2805
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.527 14 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2806
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.527 14 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2807
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.527 14 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2809
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.527 14 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.527 14 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.527 14 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.527 14 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.527 14 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.527 14 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.528 14 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.528 14 DEBUG cotyledon.oslo_config_glue [-] enable_notifications           = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.528 14 DEBUG cotyledon.oslo_config_glue [-] enable_prometheus_exporter     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.528 14 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.528 14 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.528 14 DEBUG cotyledon.oslo_config_glue [-] heartbeat_socket_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.528 14 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.528 14 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.529 14 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.529 14 DEBUG cotyledon.oslo_config_glue [-] identity_name_discovery        = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.529 14 DEBUG cotyledon.oslo_config_glue [-] ignore_disabled_projects       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.529 14 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.529 14 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.529 14 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.529 14 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.529 14 DEBUG cotyledon.oslo_config_glue [-] log_color                      = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.529 14 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.529 14 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.529 14 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.529 14 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.530 14 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.530 14 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.530 14 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.530 14 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.530 14 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.530 14 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.530 14 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.530 14 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.530 14 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.530 14 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.530 14 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.530 14 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.530 14 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.531 14 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.531 14 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['compute'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.531 14 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.531 14 DEBUG cotyledon.oslo_config_glue [-] prometheus_listen_addresses    = ['127.0.0.1:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.531 14 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_certfile        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.531 14 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_enable          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.531 14 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_keyfile         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.531 14 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.531 14 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.531 14 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.531 14 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.532 14 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.532 14 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.532 14 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.532 14 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.532 14 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.532 14 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.532 14 DEBUG cotyledon.oslo_config_glue [-] shell_completion               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.532 14 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.532 14 DEBUG cotyledon.oslo_config_glue [-] threads_to_process_pollsters   = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.532 14 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.532 14 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.533 14 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.533 14 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.533 14 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.533 14 DEBUG cotyledon.oslo_config_glue [-] compute.fetch_extra_metadata   = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.533 14 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.533 14 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.533 14 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.533 14 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.533 14 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.533 14 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.533 14 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.533 14 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.533 14 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.12/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.534 14 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.534 14 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.534 14 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.534 14 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.534 14 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.534 14 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.534 14 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.534 14 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.534 14 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.534 14 DEBUG cotyledon.oslo_config_glue [-] polling.enable_notifications   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.534 14 DEBUG cotyledon.oslo_config_glue [-] polling.enable_prometheus_exporter = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.534 14 DEBUG cotyledon.oslo_config_glue [-] polling.heartbeat_socket_dir   = /var/lib/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.535 14 DEBUG cotyledon.oslo_config_glue [-] polling.identity_name_discovery = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.535 14 DEBUG cotyledon.oslo_config_glue [-] polling.ignore_disabled_projects = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.535 14 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.535 14 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.535 14 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_listen_addresses = ['[::]:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.535 14 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_certfile = /etc/ceilometer/tls/tls.crt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.535 14 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_enable  = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.535 14 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_keyfile = /etc/ceilometer/tls/tls.key log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.535 14 DEBUG cotyledon.oslo_config_glue [-] polling.threads_to_process_pollsters = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.535 14 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.535 14 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.535 14 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.535 14 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.535 14 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.536 14 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.536 14 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.536 14 DEBUG cotyledon.oslo_config_glue [-] service_types.aodh             = alarming log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.536 14 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.536 14 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.536 14 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.536 14 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.536 14 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.536 14 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.536 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.536 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.536 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_url   = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.536 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.536 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.537 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.537 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.default_domain_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.537 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.default_domain_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.537 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.domain_id  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.537 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.domain_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.537 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.537 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.537 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.537 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.password   = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.537 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_domain_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.537 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_domain_name = Default log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.537 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.537 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_name = service log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.537 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.537 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.537 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.system_scope = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.537 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.537 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.trust_id   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.537 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_domain_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.538 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_domain_name = Default log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.538 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_id    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.538 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.username   = ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.538 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.538 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.538 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.538 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.538 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.538 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.538 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.538 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.538 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.538 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.539 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.539 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.539 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.539 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.539 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.539 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.539 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.539 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.539 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.539 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.539 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.539 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.539 14 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.540 14 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.540 14 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.540 14 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2828
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.540 14 DEBUG cotyledon._service [-] Run service AgentManager(0) [14] wait_forever /usr/lib/python3.12/site-packages/cotyledon/_service.py:263
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.540 14 INFO cotyledon._service [-] Caught SIGTERM signal, graceful exiting of service AgentManager(0) [14]
Dec  9 05:40:21 np0005551604 virtqemud[189118]: End of file while reading data: Input/output error
Dec  9 05:40:21 np0005551604 ceilometer_agent_compute[199938]: 2025-12-09 10:40:21.549 2 DEBUG cotyledon._service_manager [-] Shutdown finish _shutdown /usr/lib/python3.12/site-packages/cotyledon/_service_manager.py:335
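The SIGTERM exchange above, where the master (pid 2) kills its workers, each child logs a graceful exit, and the master reports "Shutdown finish", is cotyledon's standard master/worker teardown. A hedged sketch of that pattern; the class name mirrors the log's naming style but is otherwise an assumption:

```python
# Assumed sketch of the cotyledon master/worker pattern seen in the
# AgentManager/AgentHeartBeatManager lines above: ServiceManager forks
# workers and relays SIGTERM to each one for a graceful exit.
import time
import cotyledon

class Agent(cotyledon.Service):
    name = "AgentManager"        # placeholder, echoing the log's service name

    def run(self):               # runs in the forked worker process
        while True:
            time.sleep(1)

    def terminate(self):         # invoked when the worker receives SIGTERM
        raise SystemExit(0)      # "graceful exiting of service ..." above

if __name__ == "__main__":
    sm = cotyledon.ServiceManager()
    sm.add(Agent, workers=1)
    sm.run()                     # master waits, then shuts workers down
```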
Dec  9 05:40:21 np0005551604 systemd[1]: libpod-b432835229990b9e7cd237d75f8273b15e565fca524d4ea9a7c1f1bf3c773614.scope: Deactivated successfully.
Dec  9 05:40:21 np0005551604 systemd[1]: libpod-b432835229990b9e7cd237d75f8273b15e565fca524d4ea9a7c1f1bf3c773614.scope: Consumed 1.669s CPU time.
Dec  9 05:40:21 np0005551604 podman[200126]: 2025-12-09 10:40:21.761404966 +0000 UTC m=+0.736886137 container died b432835229990b9e7cd237d75f8273b15e565fca524d4ea9a7c1f1bf3c773614 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.build-date=20251125)
Dec  9 05:40:21 np0005551604 systemd[1]: b432835229990b9e7cd237d75f8273b15e565fca524d4ea9a7c1f1bf3c773614-7bee615facdddd8.timer: Deactivated successfully.
Dec  9 05:40:21 np0005551604 systemd[1]: Stopped /usr/bin/podman healthcheck run b432835229990b9e7cd237d75f8273b15e565fca524d4ea9a7c1f1bf3c773614.
Dec  9 05:40:21 np0005551604 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-b432835229990b9e7cd237d75f8273b15e565fca524d4ea9a7c1f1bf3c773614-userdata-shm.mount: Deactivated successfully.
Dec  9 05:40:21 np0005551604 systemd[1]: var-lib-containers-storage-overlay-d0e767301f9af05630a0fcc687f8cd4c41f9787f32be7be18488c9538f8a8b05-merged.mount: Deactivated successfully.
Dec  9 05:40:21 np0005551604 podman[200126]: 2025-12-09 10:40:21.872855143 +0000 UTC m=+0.848336284 container cleanup b432835229990b9e7cd237d75f8273b15e565fca524d4ea9a7c1f1bf3c773614 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec  9 05:40:21 np0005551604 podman[200126]: ceilometer_agent_compute
Dec  9 05:40:21 np0005551604 podman[200168]: ceilometer_agent_compute
Dec  9 05:40:21 np0005551604 systemd[1]: edpm_ceilometer_agent_compute.service: Deactivated successfully.
Dec  9 05:40:21 np0005551604 systemd[1]: Stopped ceilometer_agent_compute container.
Dec  9 05:40:21 np0005551604 systemd[1]: Starting ceilometer_agent_compute container...
Dec  9 05:40:22 np0005551604 systemd[1]: Started libcrun container.
Dec  9 05:40:22 np0005551604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0e767301f9af05630a0fcc687f8cd4c41f9787f32be7be18488c9538f8a8b05/merged/etc/ceilometer/ceilometer_prom_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Dec  9 05:40:22 np0005551604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0e767301f9af05630a0fcc687f8cd4c41f9787f32be7be18488c9538f8a8b05/merged/etc/ceilometer/tls supports timestamps until 2038 (0x7fffffff)
Dec  9 05:40:22 np0005551604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0e767301f9af05630a0fcc687f8cd4c41f9787f32be7be18488c9538f8a8b05/merged/var/lib/openstack/config supports timestamps until 2038 (0x7fffffff)
Dec  9 05:40:22 np0005551604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/d0e767301f9af05630a0fcc687f8cd4c41f9787f32be7be18488c9538f8a8b05/merged/var/lib/kolla/config_files/config.json supports timestamps until 2038 (0x7fffffff)
Dec  9 05:40:22 np0005551604 systemd[1]: Started /usr/bin/podman healthcheck run b432835229990b9e7cd237d75f8273b15e565fca524d4ea9a7c1f1bf3c773614.
Dec  9 05:40:22 np0005551604 podman[200182]: 2025-12-09 10:40:22.103652554 +0000 UTC m=+0.123166023 container init b432835229990b9e7cd237d75f8273b15e565fca524d4ea9a7c1f1bf3c773614 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  9 05:40:22 np0005551604 ceilometer_agent_compute[200197]: + sudo -E kolla_set_configs
Dec  9 05:40:22 np0005551604 ceilometer_agent_compute[200197]: sudo: unable to send audit message: Operation not permitted
Dec  9 05:40:22 np0005551604 podman[200182]: 2025-12-09 10:40:22.136066359 +0000 UTC m=+0.155579828 container start b432835229990b9e7cd237d75f8273b15e565fca524d4ea9a7c1f1bf3c773614 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, io.buildah.version=1.41.4, managed_by=edpm_ansible)
Dec  9 05:40:22 np0005551604 podman[200182]: ceilometer_agent_compute
Dec  9 05:40:22 np0005551604 systemd[1]: Started ceilometer_agent_compute container.
Dec  9 05:40:22 np0005551604 ceilometer_agent_compute[200197]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec  9 05:40:22 np0005551604 ceilometer_agent_compute[200197]: INFO:__main__:Validating config file
Dec  9 05:40:22 np0005551604 ceilometer_agent_compute[200197]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec  9 05:40:22 np0005551604 ceilometer_agent_compute[200197]: INFO:__main__:Copying service configuration files
Dec  9 05:40:22 np0005551604 ceilometer_agent_compute[200197]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf
Dec  9 05:40:22 np0005551604 ceilometer_agent_compute[200197]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer.conf to /etc/ceilometer/ceilometer.conf
Dec  9 05:40:22 np0005551604 ceilometer_agent_compute[200197]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf
Dec  9 05:40:22 np0005551604 ceilometer_agent_compute[200197]: INFO:__main__:Deleting /etc/ceilometer/polling.yaml
Dec  9 05:40:22 np0005551604 ceilometer_agent_compute[200197]: INFO:__main__:Copying /var/lib/openstack/config/polling.yaml to /etc/ceilometer/polling.yaml
Dec  9 05:40:22 np0005551604 ceilometer_agent_compute[200197]: INFO:__main__:Setting permission for /etc/ceilometer/polling.yaml
Dec  9 05:40:22 np0005551604 ceilometer_agent_compute[200197]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Dec  9 05:40:22 np0005551604 ceilometer_agent_compute[200197]: INFO:__main__:Copying /var/lib/openstack/config/custom.conf to /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Dec  9 05:40:22 np0005551604 ceilometer_agent_compute[200197]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Dec  9 05:40:22 np0005551604 ceilometer_agent_compute[200197]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Dec  9 05:40:22 np0005551604 ceilometer_agent_compute[200197]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer-host-specific.conf to /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Dec  9 05:40:22 np0005551604 ceilometer_agent_compute[200197]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Dec  9 05:40:22 np0005551604 ceilometer_agent_compute[200197]: INFO:__main__:Writing out command to execute
Dec  9 05:40:22 np0005551604 ceilometer_agent_compute[200197]: ++ cat /run_command
Dec  9 05:40:22 np0005551604 ceilometer_agent_compute[200197]: + CMD='/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'
Dec  9 05:40:22 np0005551604 ceilometer_agent_compute[200197]: + ARGS=
Dec  9 05:40:22 np0005551604 ceilometer_agent_compute[200197]: + sudo kolla_copy_cacerts
Dec  9 05:40:22 np0005551604 podman[200204]: 2025-12-09 10:40:22.202054881 +0000 UTC m=+0.054605655 container health_status b432835229990b9e7cd237d75f8273b15e565fca524d4ea9a7c1f1bf3c773614 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=starting, health_failing_streak=1, health_log=, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251125, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS)
Dec  9 05:40:22 np0005551604 systemd[1]: b432835229990b9e7cd237d75f8273b15e565fca524d4ea9a7c1f1bf3c773614-4dddc5eb05d7029b.service: Main process exited, code=exited, status=1/FAILURE
Dec  9 05:40:22 np0005551604 systemd[1]: b432835229990b9e7cd237d75f8273b15e565fca524d4ea9a7c1f1bf3c773614-4dddc5eb05d7029b.service: Failed with result 'exit-code'.
Dec  9 05:40:22 np0005551604 ceilometer_agent_compute[200197]: sudo: unable to send audit message: Operation not permitted
Dec  9 05:40:22 np0005551604 ceilometer_agent_compute[200197]: Running command: '/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'
Dec  9 05:40:22 np0005551604 ceilometer_agent_compute[200197]: + [[ ! -n '' ]]
Dec  9 05:40:22 np0005551604 ceilometer_agent_compute[200197]: + . kolla_extend_start
Dec  9 05:40:22 np0005551604 ceilometer_agent_compute[200197]: + echo 'Running command: '\''/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'\'''
Dec  9 05:40:22 np0005551604 ceilometer_agent_compute[200197]: + umask 0022
Dec  9 05:40:22 np0005551604 ceilometer_agent_compute[200197]: + exec /usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout
Dec  9 05:40:22 np0005551604 auditd[700]: Audit daemon rotating log files
Dec  9 05:40:22 np0005551604 python3.9[200380]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/node_exporter/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  9 05:40:22 np0005551604 podman[200381]: 2025-12-09 10:40:22.892003675 +0000 UTC m=+0.056288149 container health_status 8f562587c42532f877bd4ac5090cf2d81dd9415b6201e22f74972e6d6b9e9403 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.065 2 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_manager_options /usr/lib/python3.12/site-packages/cotyledon/oslo_config_glue.py:45
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.065 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2804
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.065 2 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2805
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.065 2 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2806
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.066 2 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2807
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.066 2 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2809
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.066 2 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.066 2 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.066 2 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.066 2 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.066 2 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.066 2 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.066 2 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.066 2 DEBUG cotyledon.oslo_config_glue [-] enable_notifications           = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.066 2 DEBUG cotyledon.oslo_config_glue [-] enable_prometheus_exporter     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.066 2 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.067 2 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.067 2 DEBUG cotyledon.oslo_config_glue [-] heartbeat_socket_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.067 2 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.067 2 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.067 2 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.067 2 WARNING oslo_config.cfg [-] Deprecated: Option "tenant_name_discovery" from group "DEFAULT" is deprecated. Use option "identity_name_discovery" from group "DEFAULT".
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.067 2 DEBUG cotyledon.oslo_config_glue [-] identity_name_discovery        = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.067 2 DEBUG cotyledon.oslo_config_glue [-] ignore_disabled_projects       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.067 2 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.067 2 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.067 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.068 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.068 2 DEBUG cotyledon.oslo_config_glue [-] log_color                      = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.068 2 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.068 2 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.068 2 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.068 2 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.068 2 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.068 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.068 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.068 2 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.068 2 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.068 2 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.068 2 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.068 2 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.069 2 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.069 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.069 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.069 2 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.069 2 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.069 2 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.069 2 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['compute'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.069 2 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.069 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_listen_addresses    = ['127.0.0.1:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.069 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_certfile        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.069 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_enable          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.069 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_keyfile         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.069 2 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.069 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.070 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.070 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.070 2 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.070 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.070 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.070 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.070 2 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.070 2 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.070 2 DEBUG cotyledon.oslo_config_glue [-] shell_completion               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.070 2 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.070 2 DEBUG cotyledon.oslo_config_glue [-] threads_to_process_pollsters   = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.070 2 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.070 2 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.070 2 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.071 2 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.071 2 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.071 2 DEBUG cotyledon.oslo_config_glue [-] compute.fetch_extra_metadata   = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.071 2 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.071 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.071 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.071 2 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.071 2 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.071 2 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.071 2 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.071 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.071 2 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.12/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.071 2 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.072 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.072 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.072 2 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.072 2 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.072 2 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.072 2 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.072 2 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.072 2 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.072 2 DEBUG cotyledon.oslo_config_glue [-] polling.enable_notifications   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.072 2 DEBUG cotyledon.oslo_config_glue [-] polling.enable_prometheus_exporter = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.072 2 DEBUG cotyledon.oslo_config_glue [-] polling.heartbeat_socket_dir   = /var/lib/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.072 2 DEBUG cotyledon.oslo_config_glue [-] polling.identity_name_discovery = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.072 2 DEBUG cotyledon.oslo_config_glue [-] polling.ignore_disabled_projects = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.073 2 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.073 2 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.073 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_listen_addresses = ['[::]:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.073 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_certfile = /etc/ceilometer/tls/tls.crt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.073 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_enable  = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.073 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_keyfile = /etc/ceilometer/tls/tls.key log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.073 2 DEBUG cotyledon.oslo_config_glue [-] polling.threads_to_process_pollsters = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.073 2 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.073 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.073 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.073 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.073 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.073 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.074 2 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.074 2 DEBUG cotyledon.oslo_config_glue [-] service_types.aodh             = alarming log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.074 2 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.074 2 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.074 2 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.074 2 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.074 2 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.074 2 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.074 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.074 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.074 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.075 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.075 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.075 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.075 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.075 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.075 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.075 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.075 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.075 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.075 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.075 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.075 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.075 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.076 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.076 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.076 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.076 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.076 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.076 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.076 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.076 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.076 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.076 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.076 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.076 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.076 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.077 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.077 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.077 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.077 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.077 2 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.077 2 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.077 2 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.077 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2828
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.097 12 INFO ceilometer.polling.manager [-] Starting heartbeat child service. Listening on /var/lib/ceilometer/ceilometer-compute.socket
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.097 12 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_options /usr/lib/python3.12/site-packages/cotyledon/oslo_config_glue.py:53
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.098 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2804
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.098 12 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2805
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.098 12 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2806
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.098 12 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2807
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.098 12 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2809
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.098 12 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.098 12 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.098 12 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.098 12 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.099 12 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.099 12 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.099 12 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.099 12 DEBUG cotyledon.oslo_config_glue [-] enable_notifications           = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.099 12 DEBUG cotyledon.oslo_config_glue [-] enable_prometheus_exporter     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.099 12 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.099 12 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.099 12 DEBUG cotyledon.oslo_config_glue [-] heartbeat_socket_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.099 12 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.099 12 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.100 12 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.100 12 DEBUG cotyledon.oslo_config_glue [-] identity_name_discovery        = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.100 12 DEBUG cotyledon.oslo_config_glue [-] ignore_disabled_projects       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.100 12 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.100 12 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.100 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.100 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.100 12 DEBUG cotyledon.oslo_config_glue [-] log_color                      = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.100 12 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.100 12 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.100 12 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.101 12 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.101 12 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.101 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.101 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.101 12 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.101 12 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.101 12 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.101 12 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.101 12 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.101 12 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.101 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.101 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.102 12 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.102 12 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.102 12 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.102 12 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['compute'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.102 12 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.102 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_listen_addresses    = ['127.0.0.1:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.102 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_certfile        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.102 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_enable          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.102 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_keyfile         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.102 12 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.102 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.102 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.103 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.103 12 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.103 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.103 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.103 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.103 12 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.103 12 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.103 12 DEBUG cotyledon.oslo_config_glue [-] shell_completion               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.103 12 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.103 12 DEBUG cotyledon.oslo_config_glue [-] threads_to_process_pollsters   = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.103 12 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.104 12 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.104 12 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.104 12 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.104 12 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.104 12 DEBUG cotyledon.oslo_config_glue [-] compute.fetch_extra_metadata   = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.104 12 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.104 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.104 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.104 12 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.104 12 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.105 12 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.105 12 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.105 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.105 12 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.12/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.105 12 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.105 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.105 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.105 12 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.105 12 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.105 12 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.105 12 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.106 12 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.106 12 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.106 12 DEBUG cotyledon.oslo_config_glue [-] polling.enable_notifications   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.106 12 DEBUG cotyledon.oslo_config_glue [-] polling.enable_prometheus_exporter = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.106 12 DEBUG cotyledon.oslo_config_glue [-] polling.heartbeat_socket_dir   = /var/lib/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.106 12 DEBUG cotyledon.oslo_config_glue [-] polling.identity_name_discovery = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.106 12 DEBUG cotyledon.oslo_config_glue [-] polling.ignore_disabled_projects = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.106 12 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.106 12 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.106 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_listen_addresses = ['[::]:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.106 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_certfile = /etc/ceilometer/tls/tls.crt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.106 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_enable  = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.107 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_keyfile = /etc/ceilometer/tls/tls.key log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.107 12 DEBUG cotyledon.oslo_config_glue [-] polling.threads_to_process_pollsters = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.107 12 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.107 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.107 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.107 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.107 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.107 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.107 12 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.107 12 DEBUG cotyledon.oslo_config_glue [-] service_types.aodh             = alarming log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.107 12 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.108 12 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.108 12 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.108 12 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.108 12 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.108 12 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.108 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.108 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.108 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.108 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.108 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.108 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.109 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.109 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.109 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.109 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.109 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.109 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.109 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.109 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.109 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.109 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.109 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.109 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.110 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.110 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.110 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.110 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.110 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.110 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.110 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.110 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.110 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.110 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.110 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.110 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.111 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.111 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.111 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.111 12 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.111 12 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.111 12 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.111 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2828
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.111 12 DEBUG cotyledon._service [-] Run service AgentHeartBeatManager(0) [12] wait_forever /usr/lib/python3.12/site-packages/cotyledon/_service.py:263
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.113 12 DEBUG ceilometer.polling.manager [-] Started heartbeat child process. run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:519
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.116 12 DEBUG ceilometer.polling.manager [-] Started heartbeat update thread _read_queue /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:522
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.116 12 DEBUG ceilometer.polling.manager [-] Started heartbeat reporting thread _report_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:527
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.118 14 DEBUG ceilometer.compute.virt.libvirt.utils [-] Connecting to libvirt: qemu:///system new_libvirt_connection /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/utils.py:96
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.124 14 INFO ceilometer.polling.manager [-] Looking for dynamic pollsters configurations at [['/etc/ceilometer/pollsters.d']].
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.125 14 INFO ceilometer.polling.manager [-] No dynamic pollsters found in folder [/etc/ceilometer/pollsters.d].
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.125 14 INFO ceilometer.polling.manager [-] No dynamic pollsters file found in dirs [['/etc/ceilometer/pollsters.d']].
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.245 14 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_options /usr/lib/python3.12/site-packages/cotyledon/oslo_config_glue.py:53
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.245 14 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2804
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.245 14 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2805
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.245 14 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2806
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.245 14 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2807
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.245 14 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2809
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.246 14 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.246 14 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.246 14 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.246 14 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.246 14 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.246 14 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.246 14 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.246 14 DEBUG cotyledon.oslo_config_glue [-] enable_notifications           = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.246 14 DEBUG cotyledon.oslo_config_glue [-] enable_prometheus_exporter     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.246 14 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.247 14 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.247 14 DEBUG cotyledon.oslo_config_glue [-] heartbeat_socket_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.247 14 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.247 14 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.247 14 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.247 14 DEBUG cotyledon.oslo_config_glue [-] identity_name_discovery        = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.247 14 DEBUG cotyledon.oslo_config_glue [-] ignore_disabled_projects       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.247 14 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.247 14 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.247 14 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.248 14 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.248 14 DEBUG cotyledon.oslo_config_glue [-] log_color                      = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.248 14 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.248 14 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.248 14 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.248 14 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.248 14 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.248 14 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.248 14 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.248 14 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.248 14 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.248 14 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.249 14 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.249 14 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.249 14 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.249 14 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.249 14 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.249 14 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.249 14 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.249 14 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.249 14 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['compute'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.249 14 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.249 14 DEBUG cotyledon.oslo_config_glue [-] prometheus_listen_addresses    = ['127.0.0.1:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.250 14 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_certfile        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.250 14 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_enable          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.250 14 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_keyfile         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.250 14 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.250 14 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.250 14 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.250 14 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.250 14 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.250 14 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.250 14 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.250 14 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.251 14 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.251 14 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.251 14 DEBUG cotyledon.oslo_config_glue [-] shell_completion               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.251 14 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.251 14 DEBUG cotyledon.oslo_config_glue [-] threads_to_process_pollsters   = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.251 14 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.251 14 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.251 14 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.251 14 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.251 14 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.251 14 DEBUG cotyledon.oslo_config_glue [-] compute.fetch_extra_metadata   = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.251 14 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.251 14 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.252 14 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.252 14 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.252 14 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.252 14 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.252 14 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.252 14 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.252 14 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.12/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.252 14 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.252 14 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.252 14 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.252 14 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.252 14 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.253 14 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.253 14 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.253 14 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.253 14 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.253 14 DEBUG cotyledon.oslo_config_glue [-] polling.enable_notifications   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.253 14 DEBUG cotyledon.oslo_config_glue [-] polling.enable_prometheus_exporter = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.253 14 DEBUG cotyledon.oslo_config_glue [-] polling.heartbeat_socket_dir   = /var/lib/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.253 14 DEBUG cotyledon.oslo_config_glue [-] polling.identity_name_discovery = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.253 14 DEBUG cotyledon.oslo_config_glue [-] polling.ignore_disabled_projects = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.253 14 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.253 14 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.253 14 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_listen_addresses = ['[::]:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.253 14 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_certfile = /etc/ceilometer/tls/tls.crt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.254 14 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_enable  = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.254 14 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_keyfile = /etc/ceilometer/tls/tls.key log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.254 14 DEBUG cotyledon.oslo_config_glue [-] polling.threads_to_process_pollsters = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.254 14 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.254 14 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.254 14 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.254 14 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.254 14 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.254 14 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.254 14 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.254 14 DEBUG cotyledon.oslo_config_glue [-] service_types.aodh             = alarming log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.254 14 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.255 14 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.255 14 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.255 14 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.255 14 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.255 14 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.255 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.255 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.255 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_url   = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.255 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.255 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.255 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.255 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.default_domain_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.255 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.default_domain_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.255 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.domain_id  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.255 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.domain_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.256 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.256 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.256 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.256 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.password   = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.256 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_domain_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.256 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_domain_name = Default log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.256 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.256 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_name = service log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.256 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.256 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.256 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.system_scope = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.256 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.256 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.trust_id   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.256 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_domain_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.256 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_domain_name = Default log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.257 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_id    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.257 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.username   = ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.257 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.257 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.257 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.257 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.257 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.257 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.257 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.257 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.258 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.258 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.258 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.258 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.258 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.258 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.258 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.258 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.258 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.258 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.259 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.259 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.259 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.259 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.259 14 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.259 14 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.259 14 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.259 14 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2828
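[editor's note] The block ending at the asterisk separator above is oslo.config's standard startup option dump: ConfigOpts.log_opt_values() walks every registered option group and emits one DEBUG line per option, masking any option registered with secret=True (telemetry_secret, service passwords, RGW access/secret keys) as ****. A minimal sketch of that mechanism, assuming only the oslo.config package; the option names below are illustrative, not ceilometer's full set:

    import logging
    from oslo_config import cfg

    logging.basicConfig(level=logging.DEBUG)
    LOG = logging.getLogger('cotyledon.oslo_config_glue')

    # Illustrative options only; ceilometer registers many more groups.
    opts = [
        cfg.StrOpt('username', default='ceilometer'),
        cfg.StrOpt('password', secret=True),  # secret=True is why the log shows ****
    ]

    conf = cfg.ConfigOpts()
    conf.register_opts(opts, group='service_credentials')
    conf([])  # parse an empty command line and default config files
    conf.log_opt_values(LOG, logging.DEBUG)  # one DEBUG line per option, as above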
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.259 14 DEBUG cotyledon._service [-] Run service AgentManager(0) [14] wait_forever /usr/lib/python3.12/site-packages/cotyledon/_service.py:263
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.262 14 DEBUG ceilometer.agent [-] Config file: {'sources': [{'name': 'pollsters', 'interval': 120, 'meters': ['power.state', 'cpu', 'memory.usage', 'disk.*', 'network.*']}]} load_config /usr/lib/python3.12/site-packages/ceilometer/agent.py:64
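[editor's note] The dict logged above is the parsed form of the agent's polling definition file (conventionally /etc/ceilometer/polling.yaml; the exact path is an assumption here). A reconstruction of the YAML that parses to exactly that dict, checked via PyYAML:

    import yaml

    # Reconstructed polling definition; parsing it yields the dict in the log line above.
    POLLING_YAML = """
    sources:
      - name: pollsters
        interval: 120
        meters:
          - power.state
          - cpu
          - memory.usage
          - disk.*
          - network.*
    """

    print(yaml.safe_load(POLLING_YAML))
    # {'sources': [{'name': 'pollsters', 'interval': 120,
    #   'meters': ['power.state', 'cpu', 'memory.usage', 'disk.*', 'network.*']}]}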
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.284 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is greater than the number of worker threads available to execute them; the polling process can therefore be expected to take longer than usual. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.284 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
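[editor's note] With polling.threads_to_process_pollsters = 1 (see the config dump above), all pollsters for the source are queued onto a single-worker thread pool, which is what triggers the warning two lines up. A small sketch of that pattern, not ceilometer's actual code, using only the standard library:

    from concurrent.futures import ThreadPoolExecutor

    # Hypothetical pollster names; the real agent loads them via stevedore.
    POLLSTERS = ['cpu', 'memory.usage', 'power.state', 'disk.device.read.bytes']

    def poll(name):
        # A real pollster runs discovery first and skips itself when no
        # resources are found, as the log lines below show.
        return f'Finished processing pollster [{name}].'

    # max_workers=1 mirrors threads_to_process_pollsters = 1: more pollsters
    # than workers, so they execute one after another on a single thread.
    with ThreadPoolExecutor(max_workers=1) as executor:
        for line in executor.map(poll, POLLSTERS):
            print(line)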
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.285 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b800>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a759df5f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.285 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f8a75e1b7d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.285 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e19820>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a759df5f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.286 14 DEBUG ceilometer.compute.virt.libvirt.utils [-] Connecting to libvirt: qemu:///system new_libvirt_connection /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/utils.py:96
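[editor's note] The compute agent's instance discovery is backed by a libvirt connection to the local hypervisor, opened here with the URI from the log line above. A hedged sketch using the libvirt-python bindings; read-only access is an assumption, the agent may open a read-write connection:

    import libvirt

    # qemu:///system is the URI logged by new_libvirt_connection above.
    conn = libvirt.openReadOnly('qemu:///system')
    try:
        # On this node no Nova instances are running, so the list is empty,
        # which is why every pollster below is skipped.
        print([dom.name() for dom in conn.listAllDomains()])
    finally:
        conn.close()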
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.286 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75eb8080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a759df5f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.287 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75eb8110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a759df5f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.287 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b1a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a759df5f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.287 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75eb81a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a759df5f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.288 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b2c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a759df5f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.288 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a759df5f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.288 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a759df5f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.289 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a78fa8380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a759df5f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.289 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a7702ebd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a759df5f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.289 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b3e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a759df5f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.289 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.290 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a759df5f0>] with cache [{}], pollster history [{'network.incoming.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.290 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f8a7854a570>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.290 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75eb8440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a759df5f0>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.290 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.291 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a78c21460>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a759df5f0>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.291 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f8a75eb8050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.291 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b4a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a759df5f0>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'disk.device.capacity': [], 'network.outgoing.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.291 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.292 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1bce0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a759df5f0>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'disk.device.capacity': [], 'network.outgoing.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.292 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f8a75eb80e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.292 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a759df5f0>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'disk.device.capacity': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.293 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.293 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1bd10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a759df5f0>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'disk.device.capacity': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.293 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f8a75e1b260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.293 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a759df5f0>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'disk.device.capacity': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.293 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.294 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1bd70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a759df5f0>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'disk.device.capacity': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.294 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f8a75eb8170>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.294 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1bdd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a759df5f0>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'disk.device.capacity': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'disk.device.read.bytes': [], 'network.outgoing.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.294 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.297 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1be30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a759df5f0>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'disk.device.capacity': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'disk.device.read.bytes': [], 'network.outgoing.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.297 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f8a75e1b290>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.297 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1bf20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a759df5f0>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'disk.device.capacity': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'disk.device.read.bytes': [], 'network.outgoing.packets.error': [], 'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.298 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.298 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b7a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a759df5f0>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'disk.device.capacity': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'disk.device.read.bytes': [], 'network.outgoing.packets.error': [], 'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.298 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f8a75e1b2f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.299 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1bfb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a759df5f0>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'disk.device.capacity': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'disk.device.read.bytes': [], 'network.outgoing.packets.error': [], 'disk.device.read.latency': [], 'disk.device.read.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.299 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.300 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f8a75e1b350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.300 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.300 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f8a7710f530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.300 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.300 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f8a78ed1430>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.300 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.300 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f8a75e1b3b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.301 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.301 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f8a75e1b410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.301 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.301 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f8a75eb8410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.301 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.301 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f8a75e1be90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.301 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.301 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f8a75e1b470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.301 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.301 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f8a75e1b830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.302 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.302 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f8a75e1b4d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.302 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.302 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f8a75e1bad0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.302 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.302 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f8a75e1b530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.302 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.302 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f8a75e1bd40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.302 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.303 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f8a75e1bda0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.303 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.303 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f8a75e1be00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.303 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.303 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f8a75e1bef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.303 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.303 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f8a75e1b770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.303 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.303 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f8a75e1bf80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.304 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.304 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.304 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.304 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.304 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.304 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.304 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.304 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.304 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.304 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.304 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.305 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.305 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.305 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.305 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.305 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.305 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.305 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.305 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.305 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.305 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.305 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.305 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.305 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.305 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.305 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 05:40:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:40:23.305 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
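
The burst above is one ceilometer polling cycle: every configured pollster reports completion within the same few milliseconds at 10:40:23. When auditing a capture like this one, a quick way to see which pollsters ran and how often is a grep/uniq pipeline over the saved journal export (the input path below is a placeholder for wherever this log lives):

    # Tally pollster completions per meter name from a saved journal export.
    grep -o 'Finished processing pollster \[[^]]*\]' messages.log \
      | sort | uniq -c | sort -rn
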
Dec  9 05:40:23 np0005551604 python3.9[200534]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/node_exporter/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765276822.3521905-578-248556456944523/.source _original_basename=healthcheck follow=False checksum=e380c11c36804bfc65a818f2960cfa663daacfe5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec  9 05:40:24 np0005551604 python3.9[200686]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/telemetry config_pattern=node_exporter.json debug=False
Dec  9 05:40:25 np0005551604 python3.9[200838]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec  9 05:40:26 np0005551604 python3[200990]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/telemetry config_id=edpm config_overrides={} config_patterns=node_exporter.json log_base_path=/var/log/containers/stdouts debug=False
Dec  9 05:40:26 np0005551604 podman[201027]: 2025-12-09 10:40:26.389049133 +0000 UTC m=+0.045128307 container create d3a438131bb4ae6fd62d2e1493edbbbd51d1b8d6cbe1e9243f414a3aa421452b (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, config_id=edpm, container_name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec  9 05:40:26 np0005551604 podman[201027]: 2025-12-09 10:40:26.364827883 +0000 UTC m=+0.020907077 image pull 0da6a335fe1356545476b749c68f022c897de3a2139e8f0054f6937349ee2b83 quay.io/prometheus/node-exporter:v1.5.0
Dec  9 05:40:26 np0005551604 python3[200990]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name node_exporter --conmon-pidfile /run/node_exporter.pid --env OS_ENDPOINT_TYPE=internal --healthcheck-command /openstack/healthcheck node_exporter --label config_id=edpm --label container_name=node_exporter --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --publish 9100:9100 --user root --volume /var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z --volume /var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z --volume /var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw --volume /var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z quay.io/prometheus/node-exporter:v1.5.0 --web.config.file=/etc/node_exporter/node_exporter.yaml --web.disable-exporter-metrics --collector.systemd --collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service --no-collector.dmi --no-collector.entropy --no-collector.thermal_zone --no-collector.time --no-collector.timex --no-collector.uname --no-collector.stat --no-collector.hwmon --no-collector.os --no-collector.selinux --no-collector.textfile --no-collector.powersupplyclass --no-collector.pressure --no-collector.rapl
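
The PODMAN-CONTAINER-DEBUG line is the literal podman create command that edpm_container_manage rendered from node_exporter.json; everything in the config_data label also appears as real flags (--network host, --privileged, the bind mounts, and the node_exporter arguments after the image name). Once the container exists, the labels can be read back to confirm the rendering, using podman's standard Go-template syntax:

    # Read back the labels edpm_container_manage stamped on the container.
    podman inspect node_exporter \
      --format '{{ index .Config.Labels "config_id" }} {{ index .Config.Labels "managed_by" }}'
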
Dec  9 05:40:27 np0005551604 python3.9[201217]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  9 05:40:28 np0005551604 python3.9[201371]: ansible-file Invoked with path=/etc/systemd/system/edpm_node_exporter.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:40:28 np0005551604 python3.9[201522]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765276828.1435943-631-264230454308935/source dest=/etc/systemd/system/edpm_node_exporter.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:40:29 np0005551604 python3.9[201598]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec  9 05:40:29 np0005551604 systemd[1]: Reloading.
Dec  9 05:40:29 np0005551604 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  9 05:40:29 np0005551604 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
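
At this point the unit file copied to /etc/systemd/system/edpm_node_exporter.service has been loaded by the daemon-reload (the rc.local and SysV generator messages are routine noise emitted on every reload of this host). The parsed unit and its enablement state can be checked before the restart that follows:

    # Show the unit exactly as systemd parsed it, and whether it is enabled.
    systemctl cat edpm_node_exporter.service
    systemctl is-enabled edpm_node_exporter.service
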
Dec  9 05:40:30 np0005551604 podman[201681]: 2025-12-09 10:40:30.028853667 +0000 UTC m=+0.115117290 container health_status e0a077177b2f078df1f170a6e5c0e8e08d4365b999ec0c487047ed6ab628f3d6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true)
Dec  9 05:40:30 np0005551604 python3.9[201728]: ansible-systemd Invoked with state=restarted name=edpm_node_exporter.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  9 05:40:30 np0005551604 systemd[1]: Reloading.
Dec  9 05:40:30 np0005551604 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  9 05:40:30 np0005551604 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  9 05:40:30 np0005551604 systemd[1]: Starting node_exporter container...
Dec  9 05:40:30 np0005551604 systemd[1]: Started libcrun container.
Dec  9 05:40:30 np0005551604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b767b8e46db6891969d0253ca8679fa58a6854abaedf603b724fbeb9ba1179a6/merged/etc/node_exporter/tls supports timestamps until 2038 (0x7fffffff)
Dec  9 05:40:30 np0005551604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b767b8e46db6891969d0253ca8679fa58a6854abaedf603b724fbeb9ba1179a6/merged/etc/node_exporter/node_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Dec  9 05:40:30 np0005551604 systemd[1]: Started /usr/bin/podman healthcheck run d3a438131bb4ae6fd62d2e1493edbbbd51d1b8d6cbe1e9243f414a3aa421452b.
Dec  9 05:40:30 np0005551604 podman[201774]: 2025-12-09 10:40:30.771964279 +0000 UTC m=+0.123858230 container init d3a438131bb4ae6fd62d2e1493edbbbd51d1b8d6cbe1e9243f414a3aa421452b (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  9 05:40:30 np0005551604 node_exporter[201789]: ts=2025-12-09T10:40:30.787Z caller=node_exporter.go:180 level=info msg="Starting node_exporter" version="(version=1.5.0, branch=HEAD, revision=1b48970ffcf5630534fb00bb0687d73c66d1c959)"
Dec  9 05:40:30 np0005551604 node_exporter[201789]: ts=2025-12-09T10:40:30.787Z caller=node_exporter.go:181 level=info msg="Build context" build_context="(go=go1.19.3, user=root@6e7732a7b81b, date=20221129-18:59:09)"
Dec  9 05:40:30 np0005551604 node_exporter[201789]: ts=2025-12-09T10:40:30.787Z caller=node_exporter.go:183 level=warn msg="Node Exporter is running as root user. This exporter is designed to run as unprivileged user, root is not required."
Dec  9 05:40:30 np0005551604 node_exporter[201789]: ts=2025-12-09T10:40:30.787Z caller=filesystem_common.go:111 level=info collector=filesystem msg="Parsed flag --collector.filesystem.mount-points-exclude" flag=^/(dev|proc|run/credentials/.+|sys|var/lib/docker/.+|var/lib/containers/storage/.+)($|/)
Dec  9 05:40:30 np0005551604 node_exporter[201789]: ts=2025-12-09T10:40:30.787Z caller=filesystem_common.go:113 level=info collector=filesystem msg="Parsed flag --collector.filesystem.fs-types-exclude" flag=^(autofs|binfmt_misc|bpf|cgroup2?|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|iso9660|mqueue|nsfs|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|selinuxfs|squashfs|sysfs|tracefs)$
Dec  9 05:40:30 np0005551604 node_exporter[201789]: ts=2025-12-09T10:40:30.788Z caller=systemd_linux.go:152 level=info collector=systemd msg="Parsed flag --collector.systemd.unit-include" flag=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service
Dec  9 05:40:30 np0005551604 node_exporter[201789]: ts=2025-12-09T10:40:30.788Z caller=systemd_linux.go:154 level=info collector=systemd msg="Parsed flag --collector.systemd.unit-exclude" flag=.+\.(automount|device|mount|scope|slice)
Dec  9 05:40:30 np0005551604 node_exporter[201789]: ts=2025-12-09T10:40:30.788Z caller=diskstats_common.go:111 level=info collector=diskstats msg="Parsed flag --collector.diskstats.device-exclude" flag=^(ram|loop|fd|(h|s|v|xv)d[a-z]|nvme\d+n\d+p)\d+$
Dec  9 05:40:30 np0005551604 node_exporter[201789]: ts=2025-12-09T10:40:30.788Z caller=diskstats_linux.go:264 level=error collector=diskstats msg="Failed to open directory, disabling udev device properties" path=/run/udev/data
Dec  9 05:40:30 np0005551604 node_exporter[201789]: ts=2025-12-09T10:40:30.788Z caller=node_exporter.go:110 level=info msg="Enabled collectors"
Dec  9 05:40:30 np0005551604 node_exporter[201789]: ts=2025-12-09T10:40:30.788Z caller=node_exporter.go:117 level=info collector=arp
Dec  9 05:40:30 np0005551604 node_exporter[201789]: ts=2025-12-09T10:40:30.788Z caller=node_exporter.go:117 level=info collector=bcache
Dec  9 05:40:30 np0005551604 node_exporter[201789]: ts=2025-12-09T10:40:30.788Z caller=node_exporter.go:117 level=info collector=bonding
Dec  9 05:40:30 np0005551604 node_exporter[201789]: ts=2025-12-09T10:40:30.788Z caller=node_exporter.go:117 level=info collector=btrfs
Dec  9 05:40:30 np0005551604 node_exporter[201789]: ts=2025-12-09T10:40:30.788Z caller=node_exporter.go:117 level=info collector=conntrack
Dec  9 05:40:30 np0005551604 node_exporter[201789]: ts=2025-12-09T10:40:30.788Z caller=node_exporter.go:117 level=info collector=cpu
Dec  9 05:40:30 np0005551604 node_exporter[201789]: ts=2025-12-09T10:40:30.788Z caller=node_exporter.go:117 level=info collector=cpufreq
Dec  9 05:40:30 np0005551604 node_exporter[201789]: ts=2025-12-09T10:40:30.788Z caller=node_exporter.go:117 level=info collector=diskstats
Dec  9 05:40:30 np0005551604 node_exporter[201789]: ts=2025-12-09T10:40:30.788Z caller=node_exporter.go:117 level=info collector=edac
Dec  9 05:40:30 np0005551604 node_exporter[201789]: ts=2025-12-09T10:40:30.788Z caller=node_exporter.go:117 level=info collector=fibrechannel
Dec  9 05:40:30 np0005551604 node_exporter[201789]: ts=2025-12-09T10:40:30.788Z caller=node_exporter.go:117 level=info collector=filefd
Dec  9 05:40:30 np0005551604 node_exporter[201789]: ts=2025-12-09T10:40:30.788Z caller=node_exporter.go:117 level=info collector=filesystem
Dec  9 05:40:30 np0005551604 node_exporter[201789]: ts=2025-12-09T10:40:30.788Z caller=node_exporter.go:117 level=info collector=infiniband
Dec  9 05:40:30 np0005551604 node_exporter[201789]: ts=2025-12-09T10:40:30.788Z caller=node_exporter.go:117 level=info collector=ipvs
Dec  9 05:40:30 np0005551604 node_exporter[201789]: ts=2025-12-09T10:40:30.788Z caller=node_exporter.go:117 level=info collector=loadavg
Dec  9 05:40:30 np0005551604 node_exporter[201789]: ts=2025-12-09T10:40:30.788Z caller=node_exporter.go:117 level=info collector=mdadm
Dec  9 05:40:30 np0005551604 node_exporter[201789]: ts=2025-12-09T10:40:30.788Z caller=node_exporter.go:117 level=info collector=meminfo
Dec  9 05:40:30 np0005551604 node_exporter[201789]: ts=2025-12-09T10:40:30.788Z caller=node_exporter.go:117 level=info collector=netclass
Dec  9 05:40:30 np0005551604 node_exporter[201789]: ts=2025-12-09T10:40:30.788Z caller=node_exporter.go:117 level=info collector=netdev
Dec  9 05:40:30 np0005551604 node_exporter[201789]: ts=2025-12-09T10:40:30.789Z caller=node_exporter.go:117 level=info collector=netstat
Dec  9 05:40:30 np0005551604 node_exporter[201789]: ts=2025-12-09T10:40:30.789Z caller=node_exporter.go:117 level=info collector=nfs
Dec  9 05:40:30 np0005551604 node_exporter[201789]: ts=2025-12-09T10:40:30.789Z caller=node_exporter.go:117 level=info collector=nfsd
Dec  9 05:40:30 np0005551604 node_exporter[201789]: ts=2025-12-09T10:40:30.789Z caller=node_exporter.go:117 level=info collector=nvme
Dec  9 05:40:30 np0005551604 node_exporter[201789]: ts=2025-12-09T10:40:30.789Z caller=node_exporter.go:117 level=info collector=schedstat
Dec  9 05:40:30 np0005551604 node_exporter[201789]: ts=2025-12-09T10:40:30.789Z caller=node_exporter.go:117 level=info collector=sockstat
Dec  9 05:40:30 np0005551604 node_exporter[201789]: ts=2025-12-09T10:40:30.789Z caller=node_exporter.go:117 level=info collector=softnet
Dec  9 05:40:30 np0005551604 node_exporter[201789]: ts=2025-12-09T10:40:30.789Z caller=node_exporter.go:117 level=info collector=systemd
Dec  9 05:40:30 np0005551604 node_exporter[201789]: ts=2025-12-09T10:40:30.789Z caller=node_exporter.go:117 level=info collector=tapestats
Dec  9 05:40:30 np0005551604 node_exporter[201789]: ts=2025-12-09T10:40:30.789Z caller=node_exporter.go:117 level=info collector=udp_queues
Dec  9 05:40:30 np0005551604 node_exporter[201789]: ts=2025-12-09T10:40:30.789Z caller=node_exporter.go:117 level=info collector=vmstat
Dec  9 05:40:30 np0005551604 node_exporter[201789]: ts=2025-12-09T10:40:30.789Z caller=node_exporter.go:117 level=info collector=xfs
Dec  9 05:40:30 np0005551604 node_exporter[201789]: ts=2025-12-09T10:40:30.789Z caller=node_exporter.go:117 level=info collector=zfs
Dec  9 05:40:30 np0005551604 node_exporter[201789]: ts=2025-12-09T10:40:30.789Z caller=tls_config.go:232 level=info msg="Listening on" address=[::]:9100
Dec  9 05:40:30 np0005551604 node_exporter[201789]: ts=2025-12-09T10:40:30.790Z caller=tls_config.go:268 level=info msg="TLS is enabled." http2=true address=[::]:9100
Dec  9 05:40:30 np0005551604 podman[201774]: 2025-12-09 10:40:30.806435077 +0000 UTC m=+0.158328978 container start d3a438131bb4ae6fd62d2e1493edbbbd51d1b8d6cbe1e9243f414a3aa421452b (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  9 05:40:30 np0005551604 podman[201774]: node_exporter
Dec  9 05:40:30 np0005551604 systemd[1]: Started node_exporter container.
Dec  9 05:40:30 np0005551604 podman[201798]: 2025-12-09 10:40:30.871597358 +0000 UTC m=+0.055065298 container health_status d3a438131bb4ae6fd62d2e1493edbbbd51d1b8d6cbe1e9243f414a3aa421452b (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
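
node_exporter is now up with TLS on [::]:9100 (configured via --web.config.file pointing at the mounted node_exporter.yaml) and the first podman healthcheck reports healthy. The endpoint can be spot-checked with curl from the host; -k skips chain verification, since the certificate under /var/lib/openstack/certs/telemetry/default is issued by an internal CA:

    # Spot-check the TLS-protected metrics endpoint from the host.
    curl -sk https://localhost:9100/metrics | head -n 5
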
Dec  9 05:40:31 np0005551604 python3.9[201973]: ansible-ansible.builtin.systemd Invoked with name=edpm_node_exporter.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  9 05:40:31 np0005551604 systemd[1]: Stopping node_exporter container...
Dec  9 05:40:31 np0005551604 systemd[1]: libpod-d3a438131bb4ae6fd62d2e1493edbbbd51d1b8d6cbe1e9243f414a3aa421452b.scope: Deactivated successfully.
Dec  9 05:40:31 np0005551604 podman[201977]: 2025-12-09 10:40:31.699517215 +0000 UTC m=+0.048018830 container died d3a438131bb4ae6fd62d2e1493edbbbd51d1b8d6cbe1e9243f414a3aa421452b (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  9 05:40:31 np0005551604 systemd[1]: d3a438131bb4ae6fd62d2e1493edbbbd51d1b8d6cbe1e9243f414a3aa421452b-292088ee529aa019.timer: Deactivated successfully.
Dec  9 05:40:31 np0005551604 systemd[1]: Stopped /usr/bin/podman healthcheck run d3a438131bb4ae6fd62d2e1493edbbbd51d1b8d6cbe1e9243f414a3aa421452b.
Dec  9 05:40:31 np0005551604 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-d3a438131bb4ae6fd62d2e1493edbbbd51d1b8d6cbe1e9243f414a3aa421452b-userdata-shm.mount: Deactivated successfully.
Dec  9 05:40:31 np0005551604 systemd[1]: var-lib-containers-storage-overlay-b767b8e46db6891969d0253ca8679fa58a6854abaedf603b724fbeb9ba1179a6-merged.mount: Deactivated successfully.
Dec  9 05:40:31 np0005551604 podman[201977]: 2025-12-09 10:40:31.743489193 +0000 UTC m=+0.091990768 container cleanup d3a438131bb4ae6fd62d2e1493edbbbd51d1b8d6cbe1e9243f414a3aa421452b (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  9 05:40:31 np0005551604 podman[201977]: node_exporter
Dec  9 05:40:31 np0005551604 systemd[1]: edpm_node_exporter.service: Main process exited, code=exited, status=2/INVALIDARGUMENT
Dec  9 05:40:31 np0005551604 podman[202006]: node_exporter
Dec  9 05:40:31 np0005551604 systemd[1]: edpm_node_exporter.service: Failed with result 'exit-code'.
Dec  9 05:40:31 np0005551604 systemd[1]: Stopped node_exporter container.
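
The ansible-driven restart stops the container, and the podman main process exits with status 2; systemd records this as "Failed with result 'exit-code'" even though the stop completed and the unit is immediately started again. What systemd recorded for the unit can be pulled directly once the cycle finishes:

    # Inspect what systemd recorded for the unit after the stop/start cycle.
    systemctl show edpm_node_exporter.service \
      -p ExecMainStatus -p Result -p ActiveState -p SubState
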
Dec  9 05:40:31 np0005551604 systemd[1]: Starting node_exporter container...
Dec  9 05:40:31 np0005551604 systemd[1]: Started libcrun container.
Dec  9 05:40:31 np0005551604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b767b8e46db6891969d0253ca8679fa58a6854abaedf603b724fbeb9ba1179a6/merged/etc/node_exporter/tls supports timestamps until 2038 (0x7fffffff)
Dec  9 05:40:31 np0005551604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/b767b8e46db6891969d0253ca8679fa58a6854abaedf603b724fbeb9ba1179a6/merged/etc/node_exporter/node_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Dec  9 05:40:31 np0005551604 systemd[1]: Started /usr/bin/podman healthcheck run d3a438131bb4ae6fd62d2e1493edbbbd51d1b8d6cbe1e9243f414a3aa421452b.
Dec  9 05:40:31 np0005551604 podman[202019]: 2025-12-09 10:40:31.951552552 +0000 UTC m=+0.107647931 container init d3a438131bb4ae6fd62d2e1493edbbbd51d1b8d6cbe1e9243f414a3aa421452b (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  9 05:40:31 np0005551604 node_exporter[202035]: ts=2025-12-09T10:40:31.965Z caller=node_exporter.go:180 level=info msg="Starting node_exporter" version="(version=1.5.0, branch=HEAD, revision=1b48970ffcf5630534fb00bb0687d73c66d1c959)"
Dec  9 05:40:31 np0005551604 node_exporter[202035]: ts=2025-12-09T10:40:31.965Z caller=node_exporter.go:181 level=info msg="Build context" build_context="(go=go1.19.3, user=root@6e7732a7b81b, date=20221129-18:59:09)"
Dec  9 05:40:31 np0005551604 node_exporter[202035]: ts=2025-12-09T10:40:31.965Z caller=node_exporter.go:183 level=warn msg="Node Exporter is running as root user. This exporter is designed to run as unprivileged user, root is not required."
Dec  9 05:40:31 np0005551604 node_exporter[202035]: ts=2025-12-09T10:40:31.966Z caller=systemd_linux.go:152 level=info collector=systemd msg="Parsed flag --collector.systemd.unit-include" flag=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service
Dec  9 05:40:31 np0005551604 node_exporter[202035]: ts=2025-12-09T10:40:31.966Z caller=systemd_linux.go:154 level=info collector=systemd msg="Parsed flag --collector.systemd.unit-exclude" flag=.+\.(automount|device|mount|scope|slice)
Dec  9 05:40:31 np0005551604 node_exporter[202035]: ts=2025-12-09T10:40:31.966Z caller=diskstats_common.go:111 level=info collector=diskstats msg="Parsed flag --collector.diskstats.device-exclude" flag=^(ram|loop|fd|(h|s|v|xv)d[a-z]|nvme\d+n\d+p)\d+$
Dec  9 05:40:31 np0005551604 node_exporter[202035]: ts=2025-12-09T10:40:31.966Z caller=diskstats_linux.go:264 level=error collector=diskstats msg="Failed to open directory, disabling udev device properties" path=/run/udev/data
Dec  9 05:40:31 np0005551604 node_exporter[202035]: ts=2025-12-09T10:40:31.967Z caller=filesystem_common.go:111 level=info collector=filesystem msg="Parsed flag --collector.filesystem.mount-points-exclude" flag=^/(dev|proc|run/credentials/.+|sys|var/lib/docker/.+|var/lib/containers/storage/.+)($|/)
Dec  9 05:40:31 np0005551604 node_exporter[202035]: ts=2025-12-09T10:40:31.967Z caller=filesystem_common.go:113 level=info collector=filesystem msg="Parsed flag --collector.filesystem.fs-types-exclude" flag=^(autofs|binfmt_misc|bpf|cgroup2?|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|iso9660|mqueue|nsfs|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|selinuxfs|squashfs|sysfs|tracefs)$
Dec  9 05:40:31 np0005551604 node_exporter[202035]: ts=2025-12-09T10:40:31.967Z caller=node_exporter.go:110 level=info msg="Enabled collectors"
Dec  9 05:40:31 np0005551604 node_exporter[202035]: ts=2025-12-09T10:40:31.967Z caller=node_exporter.go:117 level=info collector=arp
Dec  9 05:40:31 np0005551604 node_exporter[202035]: ts=2025-12-09T10:40:31.967Z caller=node_exporter.go:117 level=info collector=bcache
Dec  9 05:40:31 np0005551604 node_exporter[202035]: ts=2025-12-09T10:40:31.967Z caller=node_exporter.go:117 level=info collector=bonding
Dec  9 05:40:31 np0005551604 node_exporter[202035]: ts=2025-12-09T10:40:31.967Z caller=node_exporter.go:117 level=info collector=btrfs
Dec  9 05:40:31 np0005551604 node_exporter[202035]: ts=2025-12-09T10:40:31.967Z caller=node_exporter.go:117 level=info collector=conntrack
Dec  9 05:40:31 np0005551604 node_exporter[202035]: ts=2025-12-09T10:40:31.967Z caller=node_exporter.go:117 level=info collector=cpu
Dec  9 05:40:31 np0005551604 node_exporter[202035]: ts=2025-12-09T10:40:31.967Z caller=node_exporter.go:117 level=info collector=cpufreq
Dec  9 05:40:31 np0005551604 node_exporter[202035]: ts=2025-12-09T10:40:31.967Z caller=node_exporter.go:117 level=info collector=diskstats
Dec  9 05:40:31 np0005551604 node_exporter[202035]: ts=2025-12-09T10:40:31.967Z caller=node_exporter.go:117 level=info collector=edac
Dec  9 05:40:31 np0005551604 node_exporter[202035]: ts=2025-12-09T10:40:31.967Z caller=node_exporter.go:117 level=info collector=fibrechannel
Dec  9 05:40:31 np0005551604 node_exporter[202035]: ts=2025-12-09T10:40:31.967Z caller=node_exporter.go:117 level=info collector=filefd
Dec  9 05:40:31 np0005551604 node_exporter[202035]: ts=2025-12-09T10:40:31.967Z caller=node_exporter.go:117 level=info collector=filesystem
Dec  9 05:40:31 np0005551604 node_exporter[202035]: ts=2025-12-09T10:40:31.967Z caller=node_exporter.go:117 level=info collector=infiniband
Dec  9 05:40:31 np0005551604 node_exporter[202035]: ts=2025-12-09T10:40:31.967Z caller=node_exporter.go:117 level=info collector=ipvs
Dec  9 05:40:31 np0005551604 node_exporter[202035]: ts=2025-12-09T10:40:31.967Z caller=node_exporter.go:117 level=info collector=loadavg
Dec  9 05:40:31 np0005551604 node_exporter[202035]: ts=2025-12-09T10:40:31.967Z caller=node_exporter.go:117 level=info collector=mdadm
Dec  9 05:40:31 np0005551604 node_exporter[202035]: ts=2025-12-09T10:40:31.967Z caller=node_exporter.go:117 level=info collector=meminfo
Dec  9 05:40:31 np0005551604 node_exporter[202035]: ts=2025-12-09T10:40:31.967Z caller=node_exporter.go:117 level=info collector=netclass
Dec  9 05:40:31 np0005551604 node_exporter[202035]: ts=2025-12-09T10:40:31.967Z caller=node_exporter.go:117 level=info collector=netdev
Dec  9 05:40:31 np0005551604 node_exporter[202035]: ts=2025-12-09T10:40:31.967Z caller=node_exporter.go:117 level=info collector=netstat
Dec  9 05:40:31 np0005551604 node_exporter[202035]: ts=2025-12-09T10:40:31.967Z caller=node_exporter.go:117 level=info collector=nfs
Dec  9 05:40:31 np0005551604 node_exporter[202035]: ts=2025-12-09T10:40:31.967Z caller=node_exporter.go:117 level=info collector=nfsd
Dec  9 05:40:31 np0005551604 node_exporter[202035]: ts=2025-12-09T10:40:31.967Z caller=node_exporter.go:117 level=info collector=nvme
Dec  9 05:40:31 np0005551604 node_exporter[202035]: ts=2025-12-09T10:40:31.967Z caller=node_exporter.go:117 level=info collector=schedstat
Dec  9 05:40:31 np0005551604 node_exporter[202035]: ts=2025-12-09T10:40:31.967Z caller=node_exporter.go:117 level=info collector=sockstat
Dec  9 05:40:31 np0005551604 node_exporter[202035]: ts=2025-12-09T10:40:31.967Z caller=node_exporter.go:117 level=info collector=softnet
Dec  9 05:40:31 np0005551604 node_exporter[202035]: ts=2025-12-09T10:40:31.967Z caller=node_exporter.go:117 level=info collector=systemd
Dec  9 05:40:31 np0005551604 node_exporter[202035]: ts=2025-12-09T10:40:31.967Z caller=node_exporter.go:117 level=info collector=tapestats
Dec  9 05:40:31 np0005551604 node_exporter[202035]: ts=2025-12-09T10:40:31.967Z caller=node_exporter.go:117 level=info collector=udp_queues
Dec  9 05:40:31 np0005551604 node_exporter[202035]: ts=2025-12-09T10:40:31.967Z caller=node_exporter.go:117 level=info collector=vmstat
Dec  9 05:40:31 np0005551604 node_exporter[202035]: ts=2025-12-09T10:40:31.967Z caller=node_exporter.go:117 level=info collector=xfs
Dec  9 05:40:31 np0005551604 node_exporter[202035]: ts=2025-12-09T10:40:31.967Z caller=node_exporter.go:117 level=info collector=zfs
Dec  9 05:40:31 np0005551604 node_exporter[202035]: ts=2025-12-09T10:40:31.968Z caller=tls_config.go:232 level=info msg="Listening on" address=[::]:9100
Dec  9 05:40:31 np0005551604 node_exporter[202035]: ts=2025-12-09T10:40:31.968Z caller=tls_config.go:268 level=info msg="TLS is enabled." http2=true address=[::]:9100
Dec  9 05:40:31 np0005551604 podman[202019]: 2025-12-09 10:40:31.976208683 +0000 UTC m=+0.132304032 container start d3a438131bb4ae6fd62d2e1493edbbbd51d1b8d6cbe1e9243f414a3aa421452b (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec  9 05:40:31 np0005551604 podman[202019]: node_exporter
Dec  9 05:40:31 np0005551604 systemd[1]: Started node_exporter container.
Dec  9 05:40:32 np0005551604 podman[202044]: 2025-12-09 10:40:32.101412786 +0000 UTC m=+0.115752526 container health_status d3a438131bb4ae6fd62d2e1493edbbbd51d1b8d6cbe1e9243f414a3aa421452b (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
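
Both starts log the same diskstats error: /run/udev/data is not bind-mounted into the container, so node_exporter disables udev device properties (disk metrics lose their udev-derived labels but are otherwise unaffected). That the mount is simply absent can be confirmed by comparing the host path with the container's mount table:

    # /run/udev/data exists on the host but is not among the container's mounts,
    # which is why the diskstats collector disabled udev device properties.
    ls -ld /run/udev/data
    podman inspect node_exporter --format '{{ range .Mounts }}{{ .Destination }} {{ end }}'
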
Dec  9 05:40:32 np0005551604 python3.9[202222]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/podman_exporter/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  9 05:40:33 np0005551604 python3.9[202345]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/podman_exporter/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765276832.200274-663-139541723923369/.source _original_basename=healthcheck follow=False checksum=e380c11c36804bfc65a818f2960cfa663daacfe5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec  9 05:40:34 np0005551604 python3.9[202497]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/telemetry config_pattern=podman_exporter.json debug=False
Dec  9 05:40:34 np0005551604 python3.9[202649]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec  9 05:40:35 np0005551604 python3[202803]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/telemetry config_id=edpm config_overrides={} config_patterns=podman_exporter.json log_base_path=/var/log/containers/stdouts debug=False
Dec  9 05:40:37 np0005551604 podman[202817]: 2025-12-09 10:40:37.419933361 +0000 UTC m=+1.456480121 image pull e56d40e393eb5ea8704d9af8cf0d74665df83747106713fda91530f201837815 quay.io/navidys/prometheus-podman-exporter:v1.10.1
Dec  9 05:40:37 np0005551604 podman[202917]: 2025-12-09 10:40:37.653282359 +0000 UTC m=+0.113007786 container create 8508a94dacd5acdb5dbf860f4282331529be5c86ebd3e90b10e1dde8bc5013e9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, config_id=edpm, container_name=podman_exporter, managed_by=edpm_ansible)
Dec  9 05:40:37 np0005551604 podman[202917]: 2025-12-09 10:40:37.582840438 +0000 UTC m=+0.042565905 image pull e56d40e393eb5ea8704d9af8cf0d74665df83747106713fda91530f201837815 quay.io/navidys/prometheus-podman-exporter:v1.10.1
Dec  9 05:40:37 np0005551604 python3[202803]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name podman_exporter --conmon-pidfile /run/podman_exporter.pid --env OS_ENDPOINT_TYPE=internal --env CONTAINER_HOST=unix:///run/podman/podman.sock --healthcheck-command /openstack/healthcheck podman_exporter --label config_id=edpm --label container_name=podman_exporter --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --publish 9882:9882 --user root --volume /var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z --volume /var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z --volume /run/podman/podman.sock:/run/podman/podman.sock:rw,z --volume /var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z quay.io/navidys/prometheus-podman-exporter:v1.10.1 --web.config.file=/etc/podman_exporter/podman_exporter.yaml
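
podman_exporter follows the same create pattern as node_exporter, with one twist: CONTAINER_HOST=unix:///run/podman/podman.sock plus the bind-mounted socket let the exporter query the host's podman instance from inside the container. The environment can be verified without entering the container:

    # Confirm the exporter is pointed at the host podman socket.
    podman inspect podman_exporter --format '{{ .Config.Env }}'
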
Dec  9 05:40:38 np0005551604 python3.9[203102]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  9 05:40:39 np0005551604 python3.9[203256]: ansible-file Invoked with path=/etc/systemd/system/edpm_podman_exporter.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:40:40 np0005551604 python3.9[203407]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765276839.339333-716-134629698888103/source dest=/etc/systemd/system/edpm_podman_exporter.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:40:40 np0005551604 podman[203455]: 2025-12-09 10:40:40.320726962 +0000 UTC m=+0.068662693 container health_status 0391d8911d61abd7376f1f93f329cadfe8d3add845c9e6f46fc2c3dfbcc4f02a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=multipathd)
Dec  9 05:40:40 np0005551604 python3.9[203503]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec  9 05:40:40 np0005551604 systemd[1]: Reloading.
Dec  9 05:40:40 np0005551604 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  9 05:40:40 np0005551604 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  9 05:40:41 np0005551604 python3.9[203614]: ansible-systemd Invoked with state=restarted name=edpm_podman_exporter.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  9 05:40:41 np0005551604 systemd[1]: Reloading.
Dec  9 05:40:41 np0005551604 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  9 05:40:41 np0005551604 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  9 05:40:41 np0005551604 systemd[1]: Starting podman_exporter container...
Dec  9 05:40:42 np0005551604 systemd[1]: Started libcrun container.
Dec  9 05:40:42 np0005551604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ab814706a1562d5947476d9c9c0756fbac53d9e81fc06504309d792bac01d5d/merged/etc/podman_exporter/podman_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Dec  9 05:40:42 np0005551604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ab814706a1562d5947476d9c9c0756fbac53d9e81fc06504309d792bac01d5d/merged/etc/podman_exporter/tls supports timestamps until 2038 (0x7fffffff)
Dec  9 05:40:42 np0005551604 systemd[1]: Started /usr/bin/podman healthcheck run 8508a94dacd5acdb5dbf860f4282331529be5c86ebd3e90b10e1dde8bc5013e9.
Dec  9 05:40:42 np0005551604 podman[203655]: 2025-12-09 10:40:42.093854817 +0000 UTC m=+0.181273646 container init 8508a94dacd5acdb5dbf860f4282331529be5c86ebd3e90b10e1dde8bc5013e9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  9 05:40:42 np0005551604 podman[203655]: 2025-12-09 10:40:42.129743381 +0000 UTC m=+0.217162190 container start 8508a94dacd5acdb5dbf860f4282331529be5c86ebd3e90b10e1dde8bc5013e9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec  9 05:40:42 np0005551604 podman[203655]: podman_exporter
Dec  9 05:40:42 np0005551604 systemd[1]: Started podman_exporter container.
Dec  9 05:40:42 np0005551604 podman_exporter[203671]: ts=2025-12-09T10:40:42.150Z caller=exporter.go:68 level=info msg="Starting podman-prometheus-exporter" version="(version=1.10.1, branch=HEAD, revision=1)"
Dec  9 05:40:42 np0005551604 podman_exporter[203671]: ts=2025-12-09T10:40:42.150Z caller=exporter.go:69 level=info msg=metrics enhanced=false
Dec  9 05:40:42 np0005551604 podman_exporter[203671]: ts=2025-12-09T10:40:42.151Z caller=handler.go:94 level=info msg="enabled collectors"
Dec  9 05:40:42 np0005551604 podman_exporter[203671]: ts=2025-12-09T10:40:42.151Z caller=handler.go:105 level=info collector=container
Dec  9 05:40:42 np0005551604 systemd[1]: Starting Podman API Service...
Dec  9 05:40:42 np0005551604 systemd[1]: Started Podman API Service.
Dec  9 05:40:42 np0005551604 podman[203687]: time="2025-12-09T10:40:42Z" level=info msg="/usr/bin/podman filtering at log level info"
Dec  9 05:40:42 np0005551604 podman[203687]: time="2025-12-09T10:40:42Z" level=info msg="Setting parallel job count to 25"
Dec  9 05:40:42 np0005551604 podman[203687]: time="2025-12-09T10:40:42Z" level=info msg="Using sqlite as database backend"
Dec  9 05:40:42 np0005551604 podman[203687]: time="2025-12-09T10:40:42Z" level=info msg="Not using native diff for overlay, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled"
Dec  9 05:40:42 np0005551604 podman[203687]: time="2025-12-09T10:40:42Z" level=info msg="Using systemd socket activation to determine API endpoint"
Dec  9 05:40:42 np0005551604 podman[203687]: time="2025-12-09T10:40:42Z" level=info msg="API service listening on \"/run/podman/podman.sock\". URI: \"unix:///run/podman/podman.sock\""
Dec  9 05:40:42 np0005551604 podman[203687]: @ - - [09/Dec/2025:10:40:42 +0000] "GET /v4.9.3/libpod/_ping HTTP/1.1" 200 2 "" "Go-http-client/1.1"
Dec  9 05:40:42 np0005551604 podman[203687]: time="2025-12-09T10:40:42Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  9 05:40:42 np0005551604 podman[203676]: 2025-12-09 10:40:42.228840657 +0000 UTC m=+0.087448272 container health_status 8508a94dacd5acdb5dbf860f4282331529be5c86ebd3e90b10e1dde8bc5013e9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=starting, health_failing_streak=1, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec  9 05:40:42 np0005551604 podman[203687]: @ - - [09/Dec/2025:10:40:42 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=true&sync=false HTTP/1.1" 200 19587 "" "Go-http-client/1.1"
Dec  9 05:40:42 np0005551604 podman_exporter[203671]: ts=2025-12-09T10:40:42.233Z caller=exporter.go:96 level=info msg="Listening on" address=:9882
Dec  9 05:40:42 np0005551604 podman_exporter[203671]: ts=2025-12-09T10:40:42.234Z caller=tls_config.go:313 level=info msg="Listening on" address=[::]:9882
Dec  9 05:40:42 np0005551604 podman_exporter[203671]: ts=2025-12-09T10:40:42.235Z caller=tls_config.go:349 level=info msg="TLS is enabled." http2=true address=[::]:9882
Dec  9 05:40:42 np0005551604 systemd[1]: 8508a94dacd5acdb5dbf860f4282331529be5c86ebd3e90b10e1dde8bc5013e9-71f17b6f5c3a2799.service: Main process exited, code=exited, status=1/FAILURE
Dec  9 05:40:42 np0005551604 systemd[1]: 8508a94dacd5acdb5dbf860f4282331529be5c86ebd3e90b10e1dde8bc5013e9-71f17b6f5c3a2799.service: Failed with result 'exit-code'.
Dec  9 05:40:43 np0005551604 python3.9[203867]: ansible-ansible.builtin.systemd Invoked with name=edpm_podman_exporter.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  9 05:40:43 np0005551604 systemd[1]: Stopping podman_exporter container...
Dec  9 05:40:43 np0005551604 podman[203687]: @ - - [09/Dec/2025:10:40:42 +0000] "GET /v4.9.3/libpod/events?filters=%7B%7D&since=&stream=true&until= HTTP/1.1" 200 1449 "" "Go-http-client/1.1"
Dec  9 05:40:43 np0005551604 systemd[1]: libpod-8508a94dacd5acdb5dbf860f4282331529be5c86ebd3e90b10e1dde8bc5013e9.scope: Deactivated successfully.
Dec  9 05:40:43 np0005551604 podman[203871]: 2025-12-09 10:40:43.733147364 +0000 UTC m=+0.065924869 container died 8508a94dacd5acdb5dbf860f4282331529be5c86ebd3e90b10e1dde8bc5013e9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  9 05:40:43 np0005551604 systemd[1]: 8508a94dacd5acdb5dbf860f4282331529be5c86ebd3e90b10e1dde8bc5013e9-71f17b6f5c3a2799.timer: Deactivated successfully.
Dec  9 05:40:43 np0005551604 systemd[1]: Stopped /usr/bin/podman healthcheck run 8508a94dacd5acdb5dbf860f4282331529be5c86ebd3e90b10e1dde8bc5013e9.
Dec  9 05:40:43 np0005551604 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8508a94dacd5acdb5dbf860f4282331529be5c86ebd3e90b10e1dde8bc5013e9-userdata-shm.mount: Deactivated successfully.
Dec  9 05:40:43 np0005551604 systemd[1]: var-lib-containers-storage-overlay-2ab814706a1562d5947476d9c9c0756fbac53d9e81fc06504309d792bac01d5d-merged.mount: Deactivated successfully.
Dec  9 05:40:44 np0005551604 podman[203871]: 2025-12-09 10:40:44.051410633 +0000 UTC m=+0.384188108 container cleanup 8508a94dacd5acdb5dbf860f4282331529be5c86ebd3e90b10e1dde8bc5013e9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec  9 05:40:44 np0005551604 podman[203871]: podman_exporter
Dec  9 05:40:44 np0005551604 systemd[1]: edpm_podman_exporter.service: Main process exited, code=exited, status=2/INVALIDARGUMENT
Dec  9 05:40:44 np0005551604 podman[203899]: podman_exporter
Dec  9 05:40:44 np0005551604 systemd[1]: edpm_podman_exporter.service: Failed with result 'exit-code'.
Dec  9 05:40:44 np0005551604 systemd[1]: Stopped podman_exporter container.
Dec  9 05:40:44 np0005551604 systemd[1]: Starting podman_exporter container...
Dec  9 05:40:44 np0005551604 systemd[1]: Started libcrun container.
Dec  9 05:40:44 np0005551604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ab814706a1562d5947476d9c9c0756fbac53d9e81fc06504309d792bac01d5d/merged/etc/podman_exporter/podman_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Dec  9 05:40:44 np0005551604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2ab814706a1562d5947476d9c9c0756fbac53d9e81fc06504309d792bac01d5d/merged/etc/podman_exporter/tls supports timestamps until 2038 (0x7fffffff)
Dec  9 05:40:44 np0005551604 systemd[1]: Started /usr/bin/podman healthcheck run 8508a94dacd5acdb5dbf860f4282331529be5c86ebd3e90b10e1dde8bc5013e9.
Dec  9 05:40:44 np0005551604 podman[203912]: 2025-12-09 10:40:44.325448593 +0000 UTC m=+0.144123899 container init 8508a94dacd5acdb5dbf860f4282331529be5c86ebd3e90b10e1dde8bc5013e9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec  9 05:40:44 np0005551604 podman_exporter[203927]: ts=2025-12-09T10:40:44.349Z caller=exporter.go:68 level=info msg="Starting podman-prometheus-exporter" version="(version=1.10.1, branch=HEAD, revision=1)"
Dec  9 05:40:44 np0005551604 podman_exporter[203927]: ts=2025-12-09T10:40:44.349Z caller=exporter.go:69 level=info msg=metrics enhanced=false
Dec  9 05:40:44 np0005551604 podman_exporter[203927]: ts=2025-12-09T10:40:44.350Z caller=handler.go:94 level=info msg="enabled collectors"
Dec  9 05:40:44 np0005551604 podman_exporter[203927]: ts=2025-12-09T10:40:44.350Z caller=handler.go:105 level=info collector=container
Dec  9 05:40:44 np0005551604 podman[203687]: @ - - [09/Dec/2025:10:40:44 +0000] "GET /v4.9.3/libpod/_ping HTTP/1.1" 200 2 "" "Go-http-client/1.1"
Dec  9 05:40:44 np0005551604 podman[203687]: time="2025-12-09T10:40:44Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  9 05:40:44 np0005551604 podman[203912]: 2025-12-09 10:40:44.362895109 +0000 UTC m=+0.181570455 container start 8508a94dacd5acdb5dbf860f4282331529be5c86ebd3e90b10e1dde8bc5013e9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  9 05:40:44 np0005551604 podman[203912]: podman_exporter
Dec  9 05:40:44 np0005551604 systemd[1]: Started podman_exporter container.
Dec  9 05:40:44 np0005551604 podman[203687]: @ - - [09/Dec/2025:10:40:44 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=true&sync=false HTTP/1.1" 200 19589 "" "Go-http-client/1.1"
Dec  9 05:40:44 np0005551604 podman_exporter[203927]: ts=2025-12-09T10:40:44.393Z caller=exporter.go:96 level=info msg="Listening on" address=:9882
Dec  9 05:40:44 np0005551604 podman_exporter[203927]: ts=2025-12-09T10:40:44.394Z caller=tls_config.go:313 level=info msg="Listening on" address=[::]:9882
Dec  9 05:40:44 np0005551604 podman_exporter[203927]: ts=2025-12-09T10:40:44.394Z caller=tls_config.go:349 level=info msg="TLS is enabled." http2=true address=[::]:9882
Dec  9 05:40:44 np0005551604 podman[203938]: 2025-12-09 10:40:44.453623178 +0000 UTC m=+0.075545119 container health_status 8508a94dacd5acdb5dbf860f4282331529be5c86ebd3e90b10e1dde8bc5013e9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  9 05:40:45 np0005551604 python3.9[204115]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/openstack_network_exporter/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  9 05:40:45 np0005551604 python3.9[204238]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/openstack_network_exporter/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765276844.612743-748-49343517884362/.source _original_basename=healthcheck follow=False checksum=e380c11c36804bfc65a818f2960cfa663daacfe5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec  9 05:40:46 np0005551604 python3.9[204390]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/telemetry config_pattern=openstack_network_exporter.json debug=False
Dec  9 05:40:47 np0005551604 python3.9[204542]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec  9 05:40:48 np0005551604 python3[204694]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/telemetry config_id=edpm config_overrides={} config_patterns=openstack_network_exporter.json log_base_path=/var/log/containers/stdouts debug=False
Dec  9 05:40:51 np0005551604 podman[204706]: 2025-12-09 10:40:51.735600487 +0000 UTC m=+3.366193530 image pull 186c5e97c6f6912533851a0044ea6da23938910e7bddfb4a6c0be9b48ab2a1d1 quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified
Dec  9 05:40:51 np0005551604 podman[204804]: 2025-12-09 10:40:51.885410249 +0000 UTC m=+0.061826757 container create 5da5cd4e36e0bba48fb617392bc8983ed1dbced7e4599ef74bb3327a2d50468d (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, maintainer=Red Hat, Inc., io.openshift.tags=minimal rhel9, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, vcs-type=git, build-date=2025-08-20T13:12:41, config_id=edpm, managed_by=edpm_ansible, distribution-scope=public, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, release=1755695350, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter)
Dec  9 05:40:51 np0005551604 podman[204804]: 2025-12-09 10:40:51.851173901 +0000 UTC m=+0.027590419 image pull 186c5e97c6f6912533851a0044ea6da23938910e7bddfb4a6c0be9b48ab2a1d1 quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified
Dec  9 05:40:51 np0005551604 python3[204694]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name openstack_network_exporter --conmon-pidfile /run/openstack_network_exporter.pid --env OS_ENDPOINT_TYPE=internal --env OPENSTACK_NETWORK_EXPORTER_YAML=/etc/openstack_network_exporter/openstack_network_exporter.yaml --healthcheck-command /openstack/healthcheck openstack-netwo --label config_id=edpm --label container_name=openstack_network_exporter --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --publish 9105:9105 --volume /var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z --volume /var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z --volume /var/run/openvswitch:/run/openvswitch:rw,z --volume /var/lib/openvswitch/ovn:/run/ovn:rw,z --volume /proc:/host/proc:ro --volume /var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified
Dec  9 05:40:52 np0005551604 podman[204966]: 2025-12-09 10:40:52.689227923 +0000 UTC m=+0.089667982 container health_status b432835229990b9e7cd237d75f8273b15e565fca524d4ea9a7c1f1bf3c773614 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=starting, health_failing_streak=2, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.schema-version=1.0, config_id=edpm, tcib_managed=true, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, io.buildah.version=1.41.4)
Dec  9 05:40:52 np0005551604 systemd[1]: b432835229990b9e7cd237d75f8273b15e565fca524d4ea9a7c1f1bf3c773614-4dddc5eb05d7029b.service: Main process exited, code=exited, status=1/FAILURE
Dec  9 05:40:52 np0005551604 systemd[1]: b432835229990b9e7cd237d75f8273b15e565fca524d4ea9a7c1f1bf3c773614-4dddc5eb05d7029b.service: Failed with result 'exit-code'.
Dec  9 05:40:52 np0005551604 python3.9[205013]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  9 05:40:53 np0005551604 podman[205139]: 2025-12-09 10:40:53.625296153 +0000 UTC m=+0.082821566 container health_status 8f562587c42532f877bd4ac5090cf2d81dd9415b6201e22f74972e6d6b9e9403 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec  9 05:40:53 np0005551604 python3.9[205186]: ansible-file Invoked with path=/etc/systemd/system/edpm_openstack_network_exporter.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:40:54 np0005551604 python3.9[205337]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765276853.9346297-801-275817525910645/source dest=/etc/systemd/system/edpm_openstack_network_exporter.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:40:55 np0005551604 python3.9[205413]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec  9 05:40:55 np0005551604 systemd[1]: Reloading.
Dec  9 05:40:55 np0005551604 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  9 05:40:55 np0005551604 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  9 05:40:56 np0005551604 python3.9[205524]: ansible-systemd Invoked with state=restarted name=edpm_openstack_network_exporter.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  9 05:40:56 np0005551604 systemd[1]: Reloading.
Dec  9 05:40:56 np0005551604 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  9 05:40:56 np0005551604 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  9 05:40:56 np0005551604 systemd[1]: Starting openstack_network_exporter container...
Dec  9 05:40:56 np0005551604 systemd[1]: Started libcrun container.
Dec  9 05:40:56 np0005551604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9bda20fe71ffab57c7c429717196f0a3056d88e31094a35e7c8ebd9592a52fde/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Dec  9 05:40:56 np0005551604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9bda20fe71ffab57c7c429717196f0a3056d88e31094a35e7c8ebd9592a52fde/merged/etc/openstack_network_exporter/openstack_network_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Dec  9 05:40:56 np0005551604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9bda20fe71ffab57c7c429717196f0a3056d88e31094a35e7c8ebd9592a52fde/merged/etc/openstack_network_exporter/tls supports timestamps until 2038 (0x7fffffff)
Dec  9 05:40:56 np0005551604 systemd[1]: Started /usr/bin/podman healthcheck run 5da5cd4e36e0bba48fb617392bc8983ed1dbced7e4599ef74bb3327a2d50468d.
Dec  9 05:40:56 np0005551604 podman[205564]: 2025-12-09 10:40:56.733879058 +0000 UTC m=+0.155577140 container init 5da5cd4e36e0bba48fb617392bc8983ed1dbced7e4599ef74bb3327a2d50468d (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, io.buildah.version=1.33.7, architecture=x86_64, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, name=ubi9-minimal, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, distribution-scope=public, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, config_id=edpm)
Dec  9 05:40:56 np0005551604 openstack_network_exporter[205580]: INFO    10:40:56 main.go:48: registering *bridge.Collector
Dec  9 05:40:56 np0005551604 openstack_network_exporter[205580]: INFO    10:40:56 main.go:48: registering *coverage.Collector
Dec  9 05:40:56 np0005551604 openstack_network_exporter[205580]: INFO    10:40:56 main.go:48: registering *datapath.Collector
Dec  9 05:40:56 np0005551604 openstack_network_exporter[205580]: INFO    10:40:56 main.go:48: registering *iface.Collector
Dec  9 05:40:56 np0005551604 openstack_network_exporter[205580]: INFO    10:40:56 main.go:48: registering *memory.Collector
Dec  9 05:40:56 np0005551604 openstack_network_exporter[205580]: INFO    10:40:56 main.go:48: registering *ovnnorthd.Collector
Dec  9 05:40:56 np0005551604 openstack_network_exporter[205580]: INFO    10:40:56 main.go:48: registering *ovn.Collector
Dec  9 05:40:56 np0005551604 openstack_network_exporter[205580]: INFO    10:40:56 main.go:48: registering *ovsdbserver.Collector
Dec  9 05:40:56 np0005551604 openstack_network_exporter[205580]: INFO    10:40:56 main.go:48: registering *pmd_perf.Collector
Dec  9 05:40:56 np0005551604 openstack_network_exporter[205580]: INFO    10:40:56 main.go:48: registering *pmd_rxq.Collector
Dec  9 05:40:56 np0005551604 openstack_network_exporter[205580]: INFO    10:40:56 main.go:48: registering *vswitch.Collector
Dec  9 05:40:56 np0005551604 openstack_network_exporter[205580]: NOTICE  10:40:56 main.go:76: listening on https://:9105/metrics
Dec  9 05:40:56 np0005551604 podman[205564]: 2025-12-09 10:40:56.763840519 +0000 UTC m=+0.185538581 container start 5da5cd4e36e0bba48fb617392bc8983ed1dbced7e4599ef74bb3327a2d50468d (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, distribution-scope=public, build-date=2025-08-20T13:12:41, managed_by=edpm_ansible, vcs-type=git, architecture=x86_64, version=9.6, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, container_name=openstack_network_exporter, vendor=Red Hat, Inc., name=ubi9-minimal, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Dec  9 05:40:56 np0005551604 podman[205564]: openstack_network_exporter
Dec  9 05:40:56 np0005551604 systemd[1]: Started openstack_network_exporter container.
Dec  9 05:40:56 np0005551604 podman[205585]: 2025-12-09 10:40:56.880150353 +0000 UTC m=+0.099234031 container health_status 5da5cd4e36e0bba48fb617392bc8983ed1dbced7e4599ef74bb3327a2d50468d (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, io.openshift.expose-services=, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., release=1755695350, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, vcs-type=git, version=9.6, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41)
Dec  9 05:40:57 np0005551604 python3.9[205765]: ansible-ansible.builtin.systemd Invoked with name=edpm_openstack_network_exporter.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  9 05:40:57 np0005551604 systemd[1]: Stopping openstack_network_exporter container...
Dec  9 05:40:57 np0005551604 systemd[1]: libpod-5da5cd4e36e0bba48fb617392bc8983ed1dbced7e4599ef74bb3327a2d50468d.scope: Deactivated successfully.
Dec  9 05:40:57 np0005551604 podman[205769]: 2025-12-09 10:40:57.745787214 +0000 UTC m=+0.059817953 container died 5da5cd4e36e0bba48fb617392bc8983ed1dbced7e4599ef74bb3327a2d50468d (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, architecture=x86_64, version=9.6, maintainer=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, release=1755695350, managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, name=ubi9-minimal, config_id=edpm, build-date=2025-08-20T13:12:41, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, container_name=openstack_network_exporter, distribution-scope=public, io.openshift.expose-services=)
Dec  9 05:40:57 np0005551604 systemd[1]: 5da5cd4e36e0bba48fb617392bc8983ed1dbced7e4599ef74bb3327a2d50468d-4a56f681095a7cee.timer: Deactivated successfully.
Dec  9 05:40:57 np0005551604 systemd[1]: Stopped /usr/bin/podman healthcheck run 5da5cd4e36e0bba48fb617392bc8983ed1dbced7e4599ef74bb3327a2d50468d.
Dec  9 05:40:57 np0005551604 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-5da5cd4e36e0bba48fb617392bc8983ed1dbced7e4599ef74bb3327a2d50468d-userdata-shm.mount: Deactivated successfully.
Dec  9 05:40:57 np0005551604 systemd[1]: var-lib-containers-storage-overlay-9bda20fe71ffab57c7c429717196f0a3056d88e31094a35e7c8ebd9592a52fde-merged.mount: Deactivated successfully.
Dec  9 05:40:58 np0005551604 podman[205769]: 2025-12-09 10:40:58.444301822 +0000 UTC m=+0.758332521 container cleanup 5da5cd4e36e0bba48fb617392bc8983ed1dbced7e4599ef74bb3327a2d50468d (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, name=ubi9-minimal, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, config_id=edpm, io.buildah.version=1.33.7, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., io.openshift.expose-services=, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec  9 05:40:58 np0005551604 podman[205769]: openstack_network_exporter
Dec  9 05:40:58 np0005551604 systemd[1]: edpm_openstack_network_exporter.service: Main process exited, code=exited, status=2/INVALIDARGUMENT
Dec  9 05:40:58 np0005551604 podman[205794]: openstack_network_exporter
Dec  9 05:40:58 np0005551604 systemd[1]: edpm_openstack_network_exporter.service: Failed with result 'exit-code'.
Dec  9 05:40:58 np0005551604 systemd[1]: Stopped openstack_network_exporter container.
Dec  9 05:40:58 np0005551604 systemd[1]: Starting openstack_network_exporter container...
Dec  9 05:40:58 np0005551604 systemd[1]: Started libcrun container.
Dec  9 05:40:58 np0005551604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9bda20fe71ffab57c7c429717196f0a3056d88e31094a35e7c8ebd9592a52fde/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Dec  9 05:40:58 np0005551604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9bda20fe71ffab57c7c429717196f0a3056d88e31094a35e7c8ebd9592a52fde/merged/etc/openstack_network_exporter/openstack_network_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Dec  9 05:40:58 np0005551604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9bda20fe71ffab57c7c429717196f0a3056d88e31094a35e7c8ebd9592a52fde/merged/etc/openstack_network_exporter/tls supports timestamps until 2038 (0x7fffffff)
Dec  9 05:40:58 np0005551604 systemd[1]: Started /usr/bin/podman healthcheck run 5da5cd4e36e0bba48fb617392bc8983ed1dbced7e4599ef74bb3327a2d50468d.
Dec  9 05:40:58 np0005551604 podman[205807]: 2025-12-09 10:40:58.696585282 +0000 UTC m=+0.150850821 container init 5da5cd4e36e0bba48fb617392bc8983ed1dbced7e4599ef74bb3327a2d50468d (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, io.openshift.tags=minimal rhel9, name=ubi9-minimal, distribution-scope=public, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, build-date=2025-08-20T13:12:41, config_id=edpm, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, url=https://catalog.redhat.com/en/search?searchType=containers)
Dec  9 05:40:58 np0005551604 openstack_network_exporter[205823]: INFO    10:40:58 main.go:48: registering *bridge.Collector
Dec  9 05:40:58 np0005551604 openstack_network_exporter[205823]: INFO    10:40:58 main.go:48: registering *coverage.Collector
Dec  9 05:40:58 np0005551604 openstack_network_exporter[205823]: INFO    10:40:58 main.go:48: registering *datapath.Collector
Dec  9 05:40:58 np0005551604 openstack_network_exporter[205823]: INFO    10:40:58 main.go:48: registering *iface.Collector
Dec  9 05:40:58 np0005551604 openstack_network_exporter[205823]: INFO    10:40:58 main.go:48: registering *memory.Collector
Dec  9 05:40:58 np0005551604 openstack_network_exporter[205823]: INFO    10:40:58 main.go:48: registering *ovnnorthd.Collector
Dec  9 05:40:58 np0005551604 openstack_network_exporter[205823]: INFO    10:40:58 main.go:48: registering *ovn.Collector
Dec  9 05:40:58 np0005551604 openstack_network_exporter[205823]: INFO    10:40:58 main.go:48: registering *ovsdbserver.Collector
Dec  9 05:40:58 np0005551604 openstack_network_exporter[205823]: INFO    10:40:58 main.go:48: registering *pmd_perf.Collector
Dec  9 05:40:58 np0005551604 openstack_network_exporter[205823]: INFO    10:40:58 main.go:48: registering *pmd_rxq.Collector
Dec  9 05:40:58 np0005551604 openstack_network_exporter[205823]: INFO    10:40:58 main.go:48: registering *vswitch.Collector
Dec  9 05:40:58 np0005551604 openstack_network_exporter[205823]: NOTICE  10:40:58 main.go:76: listening on https://:9105/metrics
Dec  9 05:40:58 np0005551604 podman[205807]: 2025-12-09 10:40:58.72747377 +0000 UTC m=+0.181739309 container start 5da5cd4e36e0bba48fb617392bc8983ed1dbced7e4599ef74bb3327a2d50468d (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, distribution-scope=public, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., version=9.6, io.openshift.expose-services=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, release=1755695350, container_name=openstack_network_exporter, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, config_id=edpm)
Dec  9 05:40:58 np0005551604 podman[205807]: openstack_network_exporter
Dec  9 05:40:58 np0005551604 systemd[1]: Started openstack_network_exporter container.
Dec  9 05:40:58 np0005551604 podman[205833]: 2025-12-09 10:40:58.814632503 +0000 UTC m=+0.074884531 container health_status 5da5cd4e36e0bba48fb617392bc8983ed1dbced7e4599ef74bb3327a2d50468d (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=minimal rhel9, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, name=ubi9-minimal, release=1755695350, distribution-scope=public, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.expose-services=, maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, vendor=Red Hat, Inc., vcs-type=git, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, version=9.6)
Dec  9 05:40:59 np0005551604 python3.9[206005]: ansible-ansible.builtin.find Invoked with file_type=directory paths=['/var/lib/openstack/healthchecks/'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec  9 05:41:00 np0005551604 podman[206129]: 2025-12-09 10:41:00.460250962 +0000 UTC m=+0.119904432 container health_status e0a077177b2f078df1f170a6e5c0e8e08d4365b999ec0c487047ed6ab628f3d6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0)
Dec  9 05:41:00 np0005551604 python3.9[206175]: ansible-containers.podman.podman_container_info Invoked with name=['ovn_controller'] executable=podman
Dec  9 05:41:01 np0005551604 python3.9[206346]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ovn_controller detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  9 05:41:02 np0005551604 systemd[1]: Started libpod-conmon-e0a077177b2f078df1f170a6e5c0e8e08d4365b999ec0c487047ed6ab628f3d6.scope.
Dec  9 05:41:02 np0005551604 podman[206347]: 2025-12-09 10:41:02.064464268 +0000 UTC m=+0.122251306 container exec e0a077177b2f078df1f170a6e5c0e8e08d4365b999ec0c487047ed6ab628f3d6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Dec  9 05:41:02 np0005551604 podman[206347]: 2025-12-09 10:41:02.096074404 +0000 UTC m=+0.153861442 container exec_died e0a077177b2f078df1f170a6e5c0e8e08d4365b999ec0c487047ed6ab628f3d6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3)
Dec  9 05:41:02 np0005551604 systemd[1]: libpod-conmon-e0a077177b2f078df1f170a6e5c0e8e08d4365b999ec0c487047ed6ab628f3d6.scope: Deactivated successfully.
Dec  9 05:41:02 np0005551604 podman[206379]: 2025-12-09 10:41:02.23718775 +0000 UTC m=+0.066714530 container health_status d3a438131bb4ae6fd62d2e1493edbbbd51d1b8d6cbe1e9243f414a3aa421452b (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec  9 05:41:02 np0005551604 python3.9[206552]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ovn_controller detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  9 05:41:02 np0005551604 systemd[1]: Started libpod-conmon-e0a077177b2f078df1f170a6e5c0e8e08d4365b999ec0c487047ed6ab628f3d6.scope.
Dec  9 05:41:02 np0005551604 podman[206553]: 2025-12-09 10:41:02.958419266 +0000 UTC m=+0.089871728 container exec e0a077177b2f078df1f170a6e5c0e8e08d4365b999ec0c487047ed6ab628f3d6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  9 05:41:03 np0005551604 podman[206572]: 2025-12-09 10:41:03.021002562 +0000 UTC m=+0.047646973 container exec_died e0a077177b2f078df1f170a6e5c0e8e08d4365b999ec0c487047ed6ab628f3d6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_id=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, managed_by=edpm_ansible, tcib_managed=true)
Dec  9 05:41:03 np0005551604 podman[206553]: 2025-12-09 10:41:03.026854031 +0000 UTC m=+0.158306473 container exec_died e0a077177b2f078df1f170a6e5c0e8e08d4365b999ec0c487047ed6ab628f3d6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Dec  9 05:41:03 np0005551604 systemd[1]: libpod-conmon-e0a077177b2f078df1f170a6e5c0e8e08d4365b999ec0c487047ed6ab628f3d6.scope: Deactivated successfully.
Dec  9 05:41:04 np0005551604 python3.9[206738]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/ovn_controller recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:41:04 np0005551604 python3.9[206890]: ansible-containers.podman.podman_container_info Invoked with name=['ovn_metadata_agent'] executable=podman
Dec  9 05:41:05 np0005551604 python3.9[207055]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ovn_metadata_agent detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  9 05:41:05 np0005551604 systemd[1]: Started libpod-conmon-8f562587c42532f877bd4ac5090cf2d81dd9415b6201e22f74972e6d6b9e9403.scope.
Dec  9 05:41:05 np0005551604 podman[207056]: 2025-12-09 10:41:05.914029442 +0000 UTC m=+0.083739731 container exec 8f562587c42532f877bd4ac5090cf2d81dd9415b6201e22f74972e6d6b9e9403 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec  9 05:41:05 np0005551604 podman[207056]: 2025-12-09 10:41:05.945006042 +0000 UTC m=+0.114716371 container exec_died 8f562587c42532f877bd4ac5090cf2d81dd9415b6201e22f74972e6d6b9e9403 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Dec  9 05:41:05 np0005551604 systemd[1]: libpod-conmon-8f562587c42532f877bd4ac5090cf2d81dd9415b6201e22f74972e6d6b9e9403.scope: Deactivated successfully.
Dec  9 05:41:06 np0005551604 python3.9[207237]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ovn_metadata_agent detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  9 05:41:06 np0005551604 systemd[1]: Started libpod-conmon-8f562587c42532f877bd4ac5090cf2d81dd9415b6201e22f74972e6d6b9e9403.scope.
Dec  9 05:41:06 np0005551604 podman[207238]: 2025-12-09 10:41:06.788067061 +0000 UTC m=+0.095929893 container exec 8f562587c42532f877bd4ac5090cf2d81dd9415b6201e22f74972e6d6b9e9403 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec  9 05:41:06 np0005551604 podman[207238]: 2025-12-09 10:41:06.822207626 +0000 UTC m=+0.130070448 container exec_died 8f562587c42532f877bd4ac5090cf2d81dd9415b6201e22f74972e6d6b9e9403 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec  9 05:41:06 np0005551604 systemd[1]: libpod-conmon-8f562587c42532f877bd4ac5090cf2d81dd9415b6201e22f74972e6d6b9e9403.scope: Deactivated successfully.
Dec  9 05:41:07 np0005551604 python3.9[207421]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/ovn_metadata_agent recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:41:08 np0005551604 python3.9[207573]: ansible-containers.podman.podman_container_info Invoked with name=['multipathd'] executable=podman
Dec  9 05:41:09 np0005551604 python3.9[207738]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=multipathd detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  9 05:41:09 np0005551604 systemd[1]: Started libpod-conmon-0391d8911d61abd7376f1f93f329cadfe8d3add845c9e6f46fc2c3dfbcc4f02a.scope.
Dec  9 05:41:09 np0005551604 podman[207739]: 2025-12-09 10:41:09.273376216 +0000 UTC m=+0.082354573 container exec 0391d8911d61abd7376f1f93f329cadfe8d3add845c9e6f46fc2c3dfbcc4f02a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team)
Dec  9 05:41:09 np0005551604 podman[207739]: 2025-12-09 10:41:09.304221332 +0000 UTC m=+0.113199679 container exec_died 0391d8911d61abd7376f1f93f329cadfe8d3add845c9e6f46fc2c3dfbcc4f02a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec  9 05:41:09 np0005551604 systemd[1]: libpod-conmon-0391d8911d61abd7376f1f93f329cadfe8d3add845c9e6f46fc2c3dfbcc4f02a.scope: Deactivated successfully.
Dec  9 05:41:10 np0005551604 python3.9[207923]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=multipathd detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  9 05:41:10 np0005551604 systemd[1]: Started libpod-conmon-0391d8911d61abd7376f1f93f329cadfe8d3add845c9e6f46fc2c3dfbcc4f02a.scope.
Dec  9 05:41:10 np0005551604 podman[207924]: 2025-12-09 10:41:10.169402101 +0000 UTC m=+0.092850919 container exec 0391d8911d61abd7376f1f93f329cadfe8d3add845c9e6f46fc2c3dfbcc4f02a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.build-date=20251202, container_name=multipathd, io.buildah.version=1.41.3, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  9 05:41:10 np0005551604 podman[207924]: 2025-12-09 10:41:10.203185217 +0000 UTC m=+0.126633995 container exec_died 0391d8911d61abd7376f1f93f329cadfe8d3add845c9e6f46fc2c3dfbcc4f02a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Dec  9 05:41:10 np0005551604 systemd[1]: libpod-conmon-0391d8911d61abd7376f1f93f329cadfe8d3add845c9e6f46fc2c3dfbcc4f02a.scope: Deactivated successfully.
Dec  9 05:41:10 np0005551604 podman[208077]: 2025-12-09 10:41:10.781724532 +0000 UTC m=+0.063607155 container health_status 0391d8911d61abd7376f1f93f329cadfe8d3add845c9e6f46fc2c3dfbcc4f02a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec  9 05:41:10 np0005551604 python3.9[208124]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/multipathd recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:41:11 np0005551604 python3.9[208277]: ansible-containers.podman.podman_container_info Invoked with name=['ceilometer_agent_compute'] executable=podman
Dec  9 05:41:12 np0005551604 python3.9[208443]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ceilometer_agent_compute detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  9 05:41:12 np0005551604 systemd[1]: Started libpod-conmon-b432835229990b9e7cd237d75f8273b15e565fca524d4ea9a7c1f1bf3c773614.scope.
Dec  9 05:41:12 np0005551604 podman[208444]: 2025-12-09 10:41:12.704818164 +0000 UTC m=+0.096585670 container exec b432835229990b9e7cd237d75f8273b15e565fca524d4ea9a7c1f1bf3c773614 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true)
Dec  9 05:41:12 np0005551604 podman[208444]: 2025-12-09 10:41:12.742367462 +0000 UTC m=+0.134134878 container exec_died b432835229990b9e7cd237d75f8273b15e565fca524d4ea9a7c1f1bf3c773614 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251125, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  9 05:41:12 np0005551604 systemd[1]: libpod-conmon-b432835229990b9e7cd237d75f8273b15e565fca524d4ea9a7c1f1bf3c773614.scope: Deactivated successfully.
Dec  9 05:41:13 np0005551604 python3.9[208628]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ceilometer_agent_compute detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  9 05:41:13 np0005551604 systemd[1]: Started libpod-conmon-b432835229990b9e7cd237d75f8273b15e565fca524d4ea9a7c1f1bf3c773614.scope.
Dec  9 05:41:13 np0005551604 podman[208629]: 2025-12-09 10:41:13.652242382 +0000 UTC m=+0.078518210 container exec b432835229990b9e7cd237d75f8273b15e565fca524d4ea9a7c1f1bf3c773614 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.license=GPLv2)
Dec  9 05:41:13 np0005551604 podman[208629]: 2025-12-09 10:41:13.68755474 +0000 UTC m=+0.113830598 container exec_died b432835229990b9e7cd237d75f8273b15e565fca524d4ea9a7c1f1bf3c773614 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_id=edpm, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec  9 05:41:13 np0005551604 systemd[1]: libpod-conmon-b432835229990b9e7cd237d75f8273b15e565fca524d4ea9a7c1f1bf3c773614.scope: Deactivated successfully.
Dec  9 05:41:14 np0005551604 python3.9[208814]: ansible-ansible.builtin.file Invoked with group=42405 mode=0700 owner=42405 path=/var/lib/openstack/healthchecks/ceilometer_agent_compute recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:41:14 np0005551604 podman[208914]: 2025-12-09 10:41:14.932931966 +0000 UTC m=+0.084121282 container health_status 8508a94dacd5acdb5dbf860f4282331529be5c86ebd3e90b10e1dde8bc5013e9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec  9 05:41:15 np0005551604 python3.9[208990]: ansible-containers.podman.podman_container_info Invoked with name=['node_exporter'] executable=podman
Dec  9 05:41:16 np0005551604 python3.9[209155]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=node_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  9 05:41:16 np0005551604 systemd[1]: Started libpod-conmon-d3a438131bb4ae6fd62d2e1493edbbbd51d1b8d6cbe1e9243f414a3aa421452b.scope.
Dec  9 05:41:16 np0005551604 podman[209156]: 2025-12-09 10:41:16.272050134 +0000 UTC m=+0.091853172 container exec d3a438131bb4ae6fd62d2e1493edbbbd51d1b8d6cbe1e9243f414a3aa421452b (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  9 05:41:16 np0005551604 podman[209156]: 2025-12-09 10:41:16.305308826 +0000 UTC m=+0.125111844 container exec_died d3a438131bb4ae6fd62d2e1493edbbbd51d1b8d6cbe1e9243f414a3aa421452b (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  9 05:41:16 np0005551604 systemd[1]: libpod-conmon-d3a438131bb4ae6fd62d2e1493edbbbd51d1b8d6cbe1e9243f414a3aa421452b.scope: Deactivated successfully.
Dec  9 05:41:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:41:16.968 106644 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  9 05:41:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:41:16.969 106644 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  9 05:41:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:41:16.969 106644 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  9 05:41:17 np0005551604 python3.9[209339]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=node_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  9 05:41:17 np0005551604 systemd[1]: Started libpod-conmon-d3a438131bb4ae6fd62d2e1493edbbbd51d1b8d6cbe1e9243f414a3aa421452b.scope.
Dec  9 05:41:17 np0005551604 podman[209340]: 2025-12-09 10:41:17.111004461 +0000 UTC m=+0.080016851 container exec d3a438131bb4ae6fd62d2e1493edbbbd51d1b8d6cbe1e9243f414a3aa421452b (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  9 05:41:17 np0005551604 podman[209340]: 2025-12-09 10:41:17.145086115 +0000 UTC m=+0.114098425 container exec_died d3a438131bb4ae6fd62d2e1493edbbbd51d1b8d6cbe1e9243f414a3aa421452b (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  9 05:41:17 np0005551604 systemd[1]: libpod-conmon-d3a438131bb4ae6fd62d2e1493edbbbd51d1b8d6cbe1e9243f414a3aa421452b.scope: Deactivated successfully.
Dec  9 05:41:17 np0005551604 nova_compute[189493]: 2025-12-09 10:41:17.213 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  9 05:41:17 np0005551604 nova_compute[189493]: 2025-12-09 10:41:17.214 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  9 05:41:17 np0005551604 nova_compute[189493]: 2025-12-09 10:41:17.242 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  9 05:41:17 np0005551604 nova_compute[189493]: 2025-12-09 10:41:17.242 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  9 05:41:17 np0005551604 nova_compute[189493]: 2025-12-09 10:41:17.243 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  9 05:41:17 np0005551604 nova_compute[189493]: 2025-12-09 10:41:17.243 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  9 05:41:17 np0005551604 nova_compute[189493]: 2025-12-09 10:41:17.243 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  9 05:41:17 np0005551604 nova_compute[189493]: 2025-12-09 10:41:17.243 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec  9 05:41:17 np0005551604 nova_compute[189493]: 2025-12-09 10:41:17.843 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  9 05:41:17 np0005551604 nova_compute[189493]: 2025-12-09 10:41:17.874 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  9 05:41:17 np0005551604 nova_compute[189493]: 2025-12-09 10:41:17.875 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  9 05:41:17 np0005551604 nova_compute[189493]: 2025-12-09 10:41:17.876 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  9 05:41:17 np0005551604 nova_compute[189493]: 2025-12-09 10:41:17.877 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
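Note: the "Running periodic task" lines and the lock acquire/release trio above come from two stock oslo patterns. A minimal sketch, assuming oslo.service and oslo.concurrency are installed; the class name and spacing are illustrative, not nova's actual values:

    from oslo_concurrency import lockutils
    from oslo_config import cfg
    from oslo_service import periodic_task

    class Manager(periodic_task.PeriodicTasks):
        def __init__(self):
            super().__init__(cfg.CONF)

        @periodic_task.periodic_task(spacing=60)  # hypothetical interval
        def _poll_rebooting_instances(self, context):
            pass  # body elided; nova's real task lives in ComputeManager

        # lockutils' inner wrapper emits the logged
        # 'Acquiring lock' / 'acquired' / '"released"' messages
        @lockutils.synchronized("compute_resources")
        def clean_compute_node_cache(self):
            pass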
Dec  9 05:41:17 np0005551604 python3.9[209523]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/node_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
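The ansible.builtin.file invocation above (state=directory, recurse=True, owner/group 0, mode 0700) is roughly the following Python; a sketch of the effect, not what the module literally executes:

    import os

    path = "/var/lib/openstack/healthchecks/node_exporter"
    os.makedirs(path, mode=0o700, exist_ok=True)
    # recurse=True: re-assert owner/group/mode on everything below path
    for root, dirs, files in os.walk(path):
        for p in [root] + [os.path.join(root, n) for n in dirs + files]:
            os.chown(p, 0, 0)
            os.chmod(p, 0o700)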
Dec  9 05:41:18 np0005551604 nova_compute[189493]: 2025-12-09 10:41:18.058 189497 WARNING nova.virt.libvirt.driver [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec  9 05:41:18 np0005551604 nova_compute[189493]: 2025-12-09 10:41:18.060 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5837MB free_disk=72.23798370361328GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec  9 05:41:18 np0005551604 nova_compute[189493]: 2025-12-09 10:41:18.060 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  9 05:41:18 np0005551604 nova_compute[189493]: 2025-12-09 10:41:18.061 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  9 05:41:18 np0005551604 nova_compute[189493]: 2025-12-09 10:41:18.325 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec  9 05:41:18 np0005551604 nova_compute[189493]: 2025-12-09 10:41:18.325 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec  9 05:41:18 np0005551604 nova_compute[189493]: 2025-12-09 10:41:18.348 189497 DEBUG nova.compute.provider_tree [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Inventory has not changed in ProviderTree for provider: cdc1168d-33c9-4d2c-8f23-1b695a68afd0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec  9 05:41:18 np0005551604 nova_compute[189493]: 2025-12-09 10:41:18.361 189497 DEBUG nova.scheduler.client.report [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Inventory has not changed for provider cdc1168d-33c9-4d2c-8f23-1b695a68afd0 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec  9 05:41:18 np0005551604 nova_compute[189493]: 2025-12-09 10:41:18.363 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec  9 05:41:18 np0005551604 nova_compute[189493]: 2025-12-09 10:41:18.363 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.302s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
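The inventory payload logged at 10:41:18.361 is what determines schedulable capacity: placement derives it per resource class as int((total - reserved) * allocation_ratio). A quick check against the values above:

    def capacity(total, reserved, allocation_ratio):
        # placement-style effective capacity per resource class
        return int((total - reserved) * allocation_ratio)

    print(capacity(8, 0, 4.0))       # VCPU      -> 32
    print(capacity(7679, 512, 1.0))  # MEMORY_MB -> 7167
    print(capacity(79, 0, 0.9))      # DISK_GB   -> 71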
Dec  9 05:41:18 np0005551604 python3.9[209675]: ansible-containers.podman.podman_container_info Invoked with name=['podman_exporter'] executable=podman
Dec  9 05:41:19 np0005551604 nova_compute[189493]: 2025-12-09 10:41:19.361 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  9 05:41:19 np0005551604 nova_compute[189493]: 2025-12-09 10:41:19.361 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec  9 05:41:19 np0005551604 nova_compute[189493]: 2025-12-09 10:41:19.361 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec  9 05:41:19 np0005551604 nova_compute[189493]: 2025-12-09 10:41:19.385 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec  9 05:41:19 np0005551604 nova_compute[189493]: 2025-12-09 10:41:19.386 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  9 05:41:20 np0005551604 python3.9[209840]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=podman_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  9 05:41:20 np0005551604 systemd[1]: Started libpod-conmon-8508a94dacd5acdb5dbf860f4282331529be5c86ebd3e90b10e1dde8bc5013e9.scope.
Dec  9 05:41:20 np0005551604 podman[209841]: 2025-12-09 10:41:20.141794916 +0000 UTC m=+0.102062269 container exec 8508a94dacd5acdb5dbf860f4282331529be5c86ebd3e90b10e1dde8bc5013e9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec  9 05:41:20 np0005551604 podman[209841]: 2025-12-09 10:41:20.172737445 +0000 UTC m=+0.133004738 container exec_died 8508a94dacd5acdb5dbf860f4282331529be5c86ebd3e90b10e1dde8bc5013e9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec  9 05:41:20 np0005551604 systemd[1]: libpod-conmon-8508a94dacd5acdb5dbf860f4282331529be5c86ebd3e90b10e1dde8bc5013e9.scope: Deactivated successfully.
Dec  9 05:41:20 np0005551604 python3.9[210023]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=podman_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  9 05:41:21 np0005551604 systemd[1]: Started libpod-conmon-8508a94dacd5acdb5dbf860f4282331529be5c86ebd3e90b10e1dde8bc5013e9.scope.
Dec  9 05:41:21 np0005551604 podman[210024]: 2025-12-09 10:41:21.095874224 +0000 UTC m=+0.085271223 container exec 8508a94dacd5acdb5dbf860f4282331529be5c86ebd3e90b10e1dde8bc5013e9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec  9 05:41:21 np0005551604 podman[210024]: 2025-12-09 10:41:21.126587846 +0000 UTC m=+0.115984795 container exec_died 8508a94dacd5acdb5dbf860f4282331529be5c86ebd3e90b10e1dde8bc5013e9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec  9 05:41:21 np0005551604 systemd[1]: libpod-conmon-8508a94dacd5acdb5dbf860f4282331529be5c86ebd3e90b10e1dde8bc5013e9.scope: Deactivated successfully.
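The two podman_container_exec calls above discover the container's runtime UID and GID by running id inside it. A sketch of the equivalent direct call (helper name is illustrative):

    import subprocess

    def exec_in_container(name, *cmd):
        # mirrors ansible's podman_container_exec with detach=False
        out = subprocess.run(["podman", "exec", name, *cmd],
                             capture_output=True, text=True, check=True)
        return out.stdout.strip()

    uid = exec_in_container("podman_exporter", "id", "-u")
    gid = exec_in_container("podman_exporter", "id", "-g")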
Dec  9 05:41:21 np0005551604 python3.9[210207]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/podman_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:41:22 np0005551604 python3.9[210359]: ansible-containers.podman.podman_container_info Invoked with name=['openstack_network_exporter'] executable=podman
Dec  9 05:41:22 np0005551604 podman[210397]: 2025-12-09 10:41:22.92879498 +0000 UTC m=+0.085774006 container health_status b432835229990b9e7cd237d75f8273b15e565fca524d4ea9a7c1f1bf3c773614 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Dec  9 05:41:23 np0005551604 python3.9[210544]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=openstack_network_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  9 05:41:23 np0005551604 systemd[1]: Started libpod-conmon-5da5cd4e36e0bba48fb617392bc8983ed1dbced7e4599ef74bb3327a2d50468d.scope.
Dec  9 05:41:23 np0005551604 podman[210545]: 2025-12-09 10:41:23.587542792 +0000 UTC m=+0.086087856 container exec 5da5cd4e36e0bba48fb617392bc8983ed1dbced7e4599ef74bb3327a2d50468d (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., name=ubi9-minimal, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, build-date=2025-08-20T13:12:41, architecture=x86_64, distribution-scope=public, io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, io.buildah.version=1.33.7, release=1755695350, vendor=Red Hat, Inc., config_id=edpm, managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, com.redhat.component=ubi9-minimal-container)
Dec  9 05:41:23 np0005551604 podman[210545]: 2025-12-09 10:41:23.598043386 +0000 UTC m=+0.096588430 container exec_died 5da5cd4e36e0bba48fb617392bc8983ed1dbced7e4599ef74bb3327a2d50468d (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, io.openshift.expose-services=, release=1755695350, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, name=ubi9-minimal, version=9.6, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., managed_by=edpm_ansible, container_name=openstack_network_exporter)
Dec  9 05:41:23 np0005551604 systemd[1]: libpod-conmon-5da5cd4e36e0bba48fb617392bc8983ed1dbced7e4599ef74bb3327a2d50468d.scope: Deactivated successfully.
Dec  9 05:41:23 np0005551604 podman[210574]: 2025-12-09 10:41:23.741870216 +0000 UTC m=+0.066589937 container health_status 8f562587c42532f877bd4ac5090cf2d81dd9415b6201e22f74972e6d6b9e9403 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, managed_by=edpm_ansible)
Dec  9 05:41:24 np0005551604 python3.9[210743]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=openstack_network_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  9 05:41:24 np0005551604 systemd[1]: Started libpod-conmon-5da5cd4e36e0bba48fb617392bc8983ed1dbced7e4599ef74bb3327a2d50468d.scope.
Dec  9 05:41:24 np0005551604 podman[210744]: 2025-12-09 10:41:24.482742533 +0000 UTC m=+0.077409969 container exec 5da5cd4e36e0bba48fb617392bc8983ed1dbced7e4599ef74bb3327a2d50468d (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, config_id=edpm, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, build-date=2025-08-20T13:12:41, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, architecture=x86_64, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, distribution-scope=public, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec  9 05:41:24 np0005551604 podman[210744]: 2025-12-09 10:41:24.516116468 +0000 UTC m=+0.110783884 container exec_died 5da5cd4e36e0bba48fb617392bc8983ed1dbced7e4599ef74bb3327a2d50468d (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, vcs-type=git, distribution-scope=public, release=1755695350, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Dec  9 05:41:24 np0005551604 systemd[1]: libpod-conmon-5da5cd4e36e0bba48fb617392bc8983ed1dbced7e4599ef74bb3327a2d50468d.scope: Deactivated successfully.
Dec  9 05:41:25 np0005551604 python3.9[210926]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/openstack_network_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:41:25 np0005551604 python3.9[211078]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall/ state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:41:26 np0005551604 python3.9[211230]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/telemetry.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  9 05:41:27 np0005551604 python3.9[211353]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/telemetry.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1765276886.156458-1082-230601162739379/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=d942d984493b214bda2913f753ff68cdcedff00e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:41:28 np0005551604 python3.9[211505]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:41:29 np0005551604 python3.9[211657]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  9 05:41:29 np0005551604 podman[211707]: 2025-12-09 10:41:29.387109948 +0000 UTC m=+0.111959347 container health_status 5da5cd4e36e0bba48fb617392bc8983ed1dbced7e4599ef74bb3327a2d50468d (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, distribution-scope=public, version=9.6, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., container_name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, url=https://catalog.redhat.com/en/search?searchType=containers, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, io.openshift.expose-services=, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec  9 05:41:29 np0005551604 python3.9[211754]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:41:30 np0005551604 python3.9[211908]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  9 05:41:30 np0005551604 podman[211958]: 2025-12-09 10:41:30.657690497 +0000 UTC m=+0.096954659 container health_status e0a077177b2f078df1f170a6e5c0e8e08d4365b999ec0c487047ed6ab628f3d6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Dec  9 05:41:30 np0005551604 python3.9[212001]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.545i0732 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:41:31 np0005551604 python3.9[212161]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  9 05:41:32 np0005551604 python3.9[212239]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:41:32 np0005551604 podman[212363]: 2025-12-09 10:41:32.679207707 +0000 UTC m=+0.080204625 container health_status d3a438131bb4ae6fd62d2e1493edbbbd51d1b8d6cbe1e9243f414a3aa421452b (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  9 05:41:32 np0005551604 python3.9[212404]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  9 05:41:33 np0005551604 python3[212568]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Dec  9 05:41:34 np0005551604 python3.9[212720]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  9 05:41:35 np0005551604 python3.9[212798]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:41:36 np0005551604 python3.9[212950]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  9 05:41:36 np0005551604 python3.9[213028]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:41:37 np0005551604 python3.9[213180]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  9 05:41:37 np0005551604 python3.9[213258]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:41:38 np0005551604 python3.9[213410]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  9 05:41:39 np0005551604 python3.9[213488]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:41:39 np0005551604 python3.9[213640]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  9 05:41:40 np0005551604 python3.9[213765]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765276899.2846398-1207-97465057705155/.source.nft follow=False _original_basename=ruleset.j2 checksum=fb3275eced3a2e06312143189928124e1b2df34a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:41:40 np0005551604 podman[213865]: 2025-12-09 10:41:40.932137827 +0000 UTC m=+0.076773746 container health_status 0391d8911d61abd7376f1f93f329cadfe8d3add845c9e6f46fc2c3dfbcc4f02a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.build-date=20251202, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  9 05:41:41 np0005551604 python3.9[213937]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:41:41 np0005551604 python3.9[214089]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
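The pipeline above validates the assembled ruleset before anything is committed: nft -c parses the concatenation in dependency order (chains, flushes, rules, update-jumps, jumps) without applying it. A sketch of the same check in Python:

    import subprocess

    order = ["edpm-chains", "edpm-flushes", "edpm-rules",
             "edpm-update-jumps", "edpm-jumps"]
    ruleset = b"".join(open(f"/etc/nftables/{n}.nft", "rb").read()
                       for n in order)
    # -c: check/dry-run only; a parse failure raises CalledProcessError
    subprocess.run(["nft", "-c", "-f", "-"], input=ruleset, check=True)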
Dec  9 05:41:43 np0005551604 python3.9[214244]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
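In the blockinfile parameters above, #012 is the journal's escape for a newline; decoded, the managed block written into /etc/sysconfig/nftables.conf reads:

    # BEGIN ANSIBLE MANAGED BLOCK
    include "/etc/nftables/iptables.nft"
    include "/etc/nftables/edpm-chains.nft"
    include "/etc/nftables/edpm-rules.nft"
    include "/etc/nftables/edpm-jumps.nft"
    # END ANSIBLE MANAGED BLOCK

The validate=nft -c -f %s parameter makes ansible re-parse the whole file before it replaces the original, so a bad include can never land.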
Dec  9 05:41:43 np0005551604 python3.9[214396]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  9 05:41:44 np0005551604 python3.9[214549]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  9 05:41:45 np0005551604 podman[214675]: 2025-12-09 10:41:45.168279528 +0000 UTC m=+0.065772136 container health_status 8508a94dacd5acdb5dbf860f4282331529be5c86ebd3e90b10e1dde8bc5013e9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
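health_status events like the one above are emitted whenever the container's configured healthcheck command runs (here '/openstack/healthcheck podman_exporter'). The same check can be driven by hand; a sketch:

    import subprocess

    # runs the container's configured test;
    # exit code 0 = healthy, 1 = unhealthy
    rc = subprocess.run(["podman", "healthcheck", "run",
                         "podman_exporter"]).returncode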
Dec  9 05:41:45 np0005551604 python3.9[214720]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  9 05:41:46 np0005551604 python3.9[214882]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:41:46 np0005551604 systemd[1]: session-26.scope: Deactivated successfully.
Dec  9 05:41:46 np0005551604 systemd[1]: session-26.scope: Consumed 1min 46.429s CPU time.
Dec  9 05:41:46 np0005551604 systemd-logind[806]: Session 26 logged out. Waiting for processes to exit.
Dec  9 05:41:46 np0005551604 systemd-logind[806]: Removed session 26.
Dec  9 05:41:51 np0005551604 systemd-logind[806]: New session 27 of user zuul.
Dec  9 05:41:51 np0005551604 systemd[1]: Started Session 27 of User zuul.
Dec  9 05:41:52 np0005551604 python3.9[215064]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec  9 05:41:53 np0005551604 systemd[1]: Reloading.
Dec  9 05:41:53 np0005551604 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update the package to include a native systemd unit file, in order to make it safer and more robust.
Dec  9 05:41:53 np0005551604 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  9 05:41:53 np0005551604 podman[215066]: 2025-12-09 10:41:53.171216167 +0000 UTC m=+0.104595501 container health_status b432835229990b9e7cd237d75f8273b15e565fca524d4ea9a7c1f1bf3c773614 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, tcib_managed=true, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251125)
Dec  9 05:41:53 np0005551604 podman[215220]: 2025-12-09 10:41:53.937863696 +0000 UTC m=+0.083948131 container health_status 8f562587c42532f877bd4ac5090cf2d81dd9415b6201e22f74972e6d6b9e9403 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Dec  9 05:41:54 np0005551604 python3.9[215289]: ansible-ansible.builtin.service_facts Invoked
Dec  9 05:41:54 np0005551604 network[215306]: You are using the 'network' service provided by 'network-scripts', which is now deprecated.
Dec  9 05:41:54 np0005551604 network[215307]: 'network-scripts' will be removed from the distribution in the near future.
Dec  9 05:41:54 np0005551604 network[215308]: It is advised to switch to 'NetworkManager' for network management instead.
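service_facts enumerates unit state much as `systemctl show` does, and the three banner lines above are printed by the legacy SysV `network` script itself. A small illustration (not the ansible module, just the same kind of query) that flags the deprecated service:

```python
import subprocess

def unit_state(unit: str) -> dict:
    """Ask systemd for a unit's state, similar to what service_facts gathers."""
    out = subprocess.run(
        ["systemctl", "show", unit, "--property=LoadState,ActiveState,UnitFileState"],
        capture_output=True, text=True, check=True,
    ).stdout
    return dict(line.split("=", 1) for line in out.strip().splitlines())

state = unit_state("network.service")
if state.get("LoadState") == "loaded":
    print("legacy network-scripts unit still present:", state)
```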
Dec  9 05:41:59 np0005551604 python3.9[215581]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_ceilometer_agent_ipmi.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  9 05:41:59 np0005551604 podman[203687]: time="2025-12-09T10:41:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  9 05:41:59 np0005551604 podman[203687]: @ - - [09/Dec/2025:10:41:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 22542 "" "Go-http-client/1.1"
Dec  9 05:41:59 np0005551604 podman[203687]: @ - - [09/Dec/2025:10:41:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3415 "" "Go-http-client/1.1"
Dec  9 05:41:59 np0005551604 podman[215583]: 2025-12-09 10:41:59.838828356 +0000 UTC m=+0.102172405 container health_status 5da5cd4e36e0bba48fb617392bc8983ed1dbced7e4599ef74bb3327a2d50468d (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vendor=Red Hat, Inc., architecture=x86_64, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., io.openshift.expose-services=, release=1755695350, vcs-type=git, version=9.6, io.openshift.tags=minimal rhel9, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, config_id=edpm, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Dec  9 05:42:00 np0005551604 python3.9[215756]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_ceilometer_agent_ipmi.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:42:00 np0005551604 podman[215781]: 2025-12-09 10:42:00.993066099 +0000 UTC m=+0.136827047 container health_status e0a077177b2f078df1f170a6e5c0e8e08d4365b999ec0c487047ed6ab628f3d6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Dec  9 05:42:01 np0005551604 openstack_network_exporter[205823]: ERROR   10:42:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  9 05:42:01 np0005551604 openstack_network_exporter[205823]: ERROR   10:42:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  9 05:42:01 np0005551604 openstack_network_exporter[205823]: ERROR   10:42:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  9 05:42:01 np0005551604 openstack_network_exporter[205823]: ERROR   10:42:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  9 05:42:01 np0005551604 openstack_network_exporter[205823]: ERROR   10:42:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
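These exporter errors mean no ovs-appctl control sockets were found: ovn-northd does not run on a compute node, so those two lines are expected, while the ovsdb-server/datapath complaints typically indicate the run directory mounted into the container is empty. A sketch of the same precondition check (OVS daemons create `<rundir>/<daemon>.<pid>.ctl`; the run directory is an assumption):

```python
import glob
import os

RUNDIR = "/var/run/openvswitch"  # assumption: default OVS run directory

def control_sockets(daemon: str) -> list[str]:
    """OVS/OVN daemons expose appctl control sockets as <daemon>.<pid>.ctl."""
    return glob.glob(os.path.join(RUNDIR, f"{daemon}.*.ctl"))

for daemon in ("ovsdb-server", "ovs-vswitchd", "ovn-northd"):
    socks = control_sockets(daemon)
    print(daemon, "->", socks or "no control socket files found")
```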
Dec  9 05:42:01 np0005551604 python3.9[215932]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_ceilometer_agent_ipmi.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:42:02 np0005551604 python3.9[216089]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then#012  systemctl disable --now certmonger.service#012  test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service#012fi#012 _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
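journald escapes embedded newlines as `#012` (octal 012, i.e. LF), so the `_raw_params` above is a four-line shell fragment. Decoded:

```sh
if systemctl is-active certmonger.service; then
  systemctl disable --now certmonger.service
  test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
fi
```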
Dec  9 05:42:02 np0005551604 podman[216168]: 2025-12-09 10:42:02.927301364 +0000 UTC m=+0.084252570 container health_status d3a438131bb4ae6fd62d2e1493edbbbd51d1b8d6cbe1e9243f414a3aa421452b (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
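node_exporter's systemd collector only reports units matching `--collector.systemd.unit-include`; the doubled backslash in config_data is just JSON escaping of a single `\.` in the regex. A quick check of what the filter admits (unit names here are illustrative; node_exporter anchors the pattern, which `fullmatch` mirrors):

```python
import re

# The include pattern as node_exporter receives it (one backslash once unescaped).
unit_include = re.compile(r"(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service")

for unit in ("edpm_nova_compute.service", "ovs-vswitchd.service",
             "virtqemud.service", "rsyslog.service", "sshd.service"):
    print(unit, "->", bool(unit_include.fullmatch(unit)))
```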
Dec  9 05:42:03 np0005551604 python3.9[216267]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Dec  9 05:42:04 np0005551604 python3.9[216419]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec  9 05:42:04 np0005551604 systemd[1]: Reloading.
Dec  9 05:42:04 np0005551604 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  9 05:42:04 np0005551604 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update the package to include a native systemd unit file in order to make it safer and more robust.
Dec  9 05:42:05 np0005551604 python3.9[216606]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_ceilometer_agent_ipmi.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
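The sequence above is the usual removal dance for a retired unit: stop and disable it, delete its unit files from both /usr/lib and /etc, daemon-reload, then reset-failed so the name no longer lingers in `systemctl --failed`. Condensed into one sketch (same unit and paths as the log; needs root):

```python
import pathlib
import subprocess

UNIT = "tripleo_ceilometer_agent_ipmi.service"

subprocess.run(["systemctl", "disable", "--now", UNIT], check=False)
for base in ("/usr/lib/systemd/system", "/etc/systemd/system"):
    pathlib.Path(base, UNIT).unlink(missing_ok=True)
subprocess.run(["systemctl", "daemon-reload"], check=True)
# Harmless if the unit never failed; clears any lingering failed state.
subprocess.run(["systemctl", "reset-failed", UNIT], check=False)
```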
Dec  9 05:42:06 np0005551604 python3.9[216759]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/openstack/config/telemetry-power-monitoring recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  9 05:42:07 np0005551604 python3.9[216909]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  9 05:42:07 np0005551604 python3.9[217061]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  9 05:42:08 np0005551604 python3.9[217182]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765276927.3273456-125-50241194790973/.source.conf follow=False _original_basename=ceilometer-host-specific.conf.j2 checksum=e86e0e43000ce9ccfe5aefbf8e8f2e3d15d05584 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Dec  9 05:42:09 np0005551604 python3.9[217334]: ansible-ansible.builtin.getent Invoked with database=passwd key=ceilometer fail_key=True service=None split=None
Dec  9 05:42:11 np0005551604 python3.9[217485]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  9 05:42:11 np0005551604 podman[217580]: 2025-12-09 10:42:11.522485455 +0000 UTC m=+0.093274354 container health_status 0391d8911d61abd7376f1f93f329cadfe8d3add845c9e6f46fc2c3dfbcc4f02a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd)
Dec  9 05:42:11 np0005551604 python3.9[217619]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer.conf mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1765276930.5951848-171-117377740823151/.source.conf _original_basename=ceilometer.conf follow=False checksum=e93ef84feaa07737af66c0c1da2fd4bdcae81d37 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:42:12 np0005551604 python3.9[217776]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/polling.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  9 05:42:13 np0005551604 python3.9[217897]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry-power-monitoring/polling.yaml mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1765276931.8564055-171-256552480878676/.source.yaml _original_basename=polling.yaml follow=False checksum=5ef7021082c6431099dde63e021011029cd65119 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:42:13 np0005551604 python3.9[218047]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/custom.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  9 05:42:14 np0005551604 python3.9[218168]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry-power-monitoring/custom.conf mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1765276933.267991-171-38003233938399/.source.conf _original_basename=custom.conf follow=False checksum=838b8b0a7d7f72e55ab67d39f32e3cb3eca2139b backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
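Each config file lands as a stat (to fetch the current SHA-1) followed by a copy only when the checksum differs, which is what keeps the play idempotent. The compare-then-copy core, reduced to a sketch (paths hypothetical):

```python
import hashlib
import shutil
from pathlib import Path
from typing import Optional

def sha1(path: Path) -> Optional[str]:
    return hashlib.sha1(path.read_bytes()).hexdigest() if path.exists() else None

def deploy(src: Path, dest: Path) -> bool:
    """Copy src over dest only when contents differ; True means changed."""
    if sha1(src) == sha1(dest):
        return False
    shutil.copy2(src, dest)
    return True

print(deploy(Path("ceilometer.conf"),
             Path("/var/lib/openstack/config/telemetry-power-monitoring/ceilometer.conf")))
```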
Dec  9 05:42:15 np0005551604 python3.9[218318]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.crt follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  9 05:42:15 np0005551604 podman[218444]: 2025-12-09 10:42:15.772424912 +0000 UTC m=+0.071131873 container health_status 8508a94dacd5acdb5dbf860f4282331529be5c86ebd3e90b10e1dde8bc5013e9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec  9 05:42:15 np0005551604 python3.9[218475]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.key follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  9 05:42:16 np0005551604 python3.9[218646]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  9 05:42:16 np0005551604 nova_compute[189493]: 2025-12-09 10:42:16.841 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  9 05:42:16 np0005551604 nova_compute[189493]: 2025-12-09 10:42:16.841 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  9 05:42:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:42:16.969 106644 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  9 05:42:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:42:16.970 106644 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  9 05:42:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:42:16.970 106644 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  9 05:42:17 np0005551604 python3.9[218767]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1765276936.1018965-230-116022038220383/.source.json follow=False _original_basename=ceilometer-agent-ipmi.json.j2 checksum=21255e7f7db3155b4a491729298d9407fe6f8335 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
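`mode=420` here is not an odd permission: the playbook passed the mode as a bare JSON integer, and 420 decimal is 0o644 (rw-r--r--), the same value the earlier tasks spell as 0644:

```python
print(oct(420))      # '0o644'
print(0o644 == 420)  # True
```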
Dec  9 05:42:17 np0005551604 nova_compute[189493]: 2025-12-09 10:42:17.841 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  9 05:42:17 np0005551604 nova_compute[189493]: 2025-12-09 10:42:17.841 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  9 05:42:17 np0005551604 nova_compute[189493]: 2025-12-09 10:42:17.842 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  9 05:42:17 np0005551604 nova_compute[189493]: 2025-12-09 10:42:17.842 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec  9 05:42:18 np0005551604 python3.9[218917]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  9 05:42:18 np0005551604 python3.9[218993]: ansible-ansible.legacy.file Invoked with mode=420 dest=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-host-specific.conf _original_basename=ceilometer-host-specific.conf.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-host-specific.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:42:18 np0005551604 nova_compute[189493]: 2025-12-09 10:42:18.842 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  9 05:42:18 np0005551604 nova_compute[189493]: 2025-12-09 10:42:18.843 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec  9 05:42:18 np0005551604 nova_compute[189493]: 2025-12-09 10:42:18.844 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec  9 05:42:18 np0005551604 nova_compute[189493]: 2025-12-09 10:42:18.928 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec  9 05:42:18 np0005551604 nova_compute[189493]: 2025-12-09 10:42:18.929 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  9 05:42:19 np0005551604 python3.9[219143]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_agent_ipmi.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  9 05:42:19 np0005551604 nova_compute[189493]: 2025-12-09 10:42:19.841 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  9 05:42:19 np0005551604 nova_compute[189493]: 2025-12-09 10:42:19.870 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  9 05:42:19 np0005551604 nova_compute[189493]: 2025-12-09 10:42:19.870 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  9 05:42:19 np0005551604 nova_compute[189493]: 2025-12-09 10:42:19.871 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
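The acquiring/acquired/released triplets are emitted by oslo.concurrency whenever code runs under a named lock, here the resource tracker serializing on "compute_resources". The same primitive in miniature (in-process lock; the decorated function is a stand-in, not nova's real one):

```python
from oslo_concurrency import lockutils

@lockutils.synchronized("compute_resources")
def audit_resources():
    # Runs with the named lock held; oslo logs the same
    # "Acquiring lock ... acquired ... released" lines seen above.
    pass

audit_resources()
```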
Dec  9 05:42:19 np0005551604 nova_compute[189493]: 2025-12-09 10:42:19.871 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec  9 05:42:19 np0005551604 python3.9[219264]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_agent_ipmi.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1765276938.6147618-230-177150612178686/.source.json follow=False _original_basename=ceilometer_agent_ipmi.json.j2 checksum=cf81874b7544c057599ec397442879f74d42b3ec backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:42:20 np0005551604 nova_compute[189493]: 2025-12-09 10:42:20.082 189497 WARNING nova.virt.libvirt.driver [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec  9 05:42:20 np0005551604 nova_compute[189493]: 2025-12-09 10:42:20.083 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5824MB free_disk=72.23726272583008GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
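The resource view embeds the PCI device list as JSON: 6 virtio functions (vendor 1af4) and 5 Intel chipset functions (8086) on this guest. A quick tally over a trimmed copy of that list:

```python
import collections
import json

pci_json = """[
  {"address": "0000:00:02.0", "vendor_id": "1af4", "product_id": "1050"},
  {"address": "0000:00:01.3", "vendor_id": "8086", "product_id": "7113"},
  {"address": "0000:00:04.0", "vendor_id": "1af4", "product_id": "1001"}
]"""  # trimmed; the log line carries 11 devices in full

print(collections.Counter(d["vendor_id"] for d in json.loads(pci_json)))
```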
Dec  9 05:42:20 np0005551604 nova_compute[189493]: 2025-12-09 10:42:20.083 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  9 05:42:20 np0005551604 nova_compute[189493]: 2025-12-09 10:42:20.083 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  9 05:42:20 np0005551604 nova_compute[189493]: 2025-12-09 10:42:20.153 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec  9 05:42:20 np0005551604 nova_compute[189493]: 2025-12-09 10:42:20.154 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec  9 05:42:20 np0005551604 nova_compute[189493]: 2025-12-09 10:42:20.180 189497 DEBUG nova.compute.provider_tree [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Inventory has not changed in ProviderTree for provider: cdc1168d-33c9-4d2c-8f23-1b695a68afd0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec  9 05:42:20 np0005551604 nova_compute[189493]: 2025-12-09 10:42:20.200 189497 DEBUG nova.scheduler.client.report [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Inventory has not changed for provider cdc1168d-33c9-4d2c-8f23-1b695a68afd0 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec  9 05:42:20 np0005551604 nova_compute[189493]: 2025-12-09 10:42:20.201 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec  9 05:42:20 np0005551604 nova_compute[189493]: 2025-12-09 10:42:20.202 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.118s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
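That inventory record is what placement schedules against: usable capacity per resource class is (total - reserved) * allocation_ratio. Worked out for the values logged above:

```python
inventory = {
    "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
    "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
    "DISK_GB":   {"total": 79,   "reserved": 0,   "allocation_ratio": 0.9},
}
for rc, inv in inventory.items():
    capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
    print(rc, "->", capacity)
# MEMORY_MB -> 7167.0, VCPU -> 32.0, DISK_GB -> ~71.1
```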
Dec  9 05:42:20 np0005551604 python3.9[219414]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  9 05:42:21 np0005551604 nova_compute[189493]: 2025-12-09 10:42:21.202 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  9 05:42:21 np0005551604 python3.9[219535]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1765276940.066474-230-3015603195563/.source.yaml follow=False _original_basename=ceilometer_prom_exporter.yaml.j2 checksum=10157c879411ee6023e506dc85a343cedc52700f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:42:22 np0005551604 python3.9[219685]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/firewall.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  9 05:42:22 np0005551604 python3.9[219806]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry-power-monitoring/firewall.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1765276941.4666798-230-102056999529110/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=40b8960d32c81de936cddbeb137a8240ecc54e7b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:42:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:42:23.285 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them; therefore, processing can take longer than expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  9 05:42:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:42:23.286 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
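The warning above just says the source has more pollsters than worker threads, so with [1] thread every pollster queues behind the previous one. The pattern at its core (names hypothetical):

```python
from concurrent.futures import ThreadPoolExecutor

def poll(name: str) -> str:
    return f"polled {name}"

pollsters = ["cpu", "disk.device.usage", "network.incoming.bytes"]

# More tasks than workers: submissions queue and run sequentially,
# which is exactly what the manager's message warns about.
with ThreadPoolExecutor(max_workers=1) as executor:
    for result in executor.map(poll, pollsters):
        print(result)
```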
Dec  9 05:42:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:42:23.286 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b800>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a74881280>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 05:42:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:42:23.287 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f8a75e1b7d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 05:42:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:42:23.287 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e19820>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a74881280>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 05:42:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:42:23.289 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75eb8080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a74881280>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 05:42:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:42:23.289 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75eb8110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a74881280>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 05:42:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:42:23.289 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b1a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a74881280>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 05:42:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:42:23.289 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75eb81a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a74881280>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 05:42:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:42:23.289 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b2c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a74881280>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 05:42:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:42:23.289 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a74881280>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 05:42:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:42:23.289 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a74881280>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 05:42:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:42:23.289 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a78fa8380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a74881280>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 05:42:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:42:23.290 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a7702ebd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a74881280>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 05:42:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:42:23.290 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b3e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a74881280>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 05:42:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:42:23.290 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a74881280>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 05:42:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:42:23.290 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75eb8440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a74881280>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 05:42:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:42:23.290 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a78c21460>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a74881280>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 05:42:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:42:23.290 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b4a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a74881280>] with cache [{}], pollster history [{'network.incoming.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 05:42:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:42:23.290 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
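Every "Skip pollster …" line from here on follows a discovery pass whose local_instances list came back empty; there are no VMs on this node yet (nova's info-cache heal above found none either), so each compute pollster short-circuits. The guard, sketched (the discover callable is a stand-in):

```python
def run_pollster(name, discover):
    resources = discover()  # e.g. the local_instances discovery
    if not resources:
        print(f"Skip pollster {name}, no resources found this cycle")
        return
    for resource in resources:
        pass  # collect one sample per resource here

run_pollster("network.incoming.bytes", discover=lambda: [])
```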
Dec  9 05:42:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:42:23.291 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1bce0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a74881280>] with cache [{}], pollster history [{'network.incoming.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 05:42:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:42:23.291 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f8a7854a570>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 05:42:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:42:23.291 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a74881280>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 05:42:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:42:23.291 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  9 05:42:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:42:23.292 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1bd10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a74881280>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 05:42:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:42:23.292 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f8a75eb8050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 05:42:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:42:23.292 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a74881280>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'disk.device.capacity': [], 'network.outgoing.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 05:42:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:42:23.293 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  9 05:42:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:42:23.293 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1bd70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a74881280>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'disk.device.capacity': [], 'network.outgoing.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 05:42:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:42:23.293 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f8a75eb80e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 05:42:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:42:23.293 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1bdd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a74881280>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'disk.device.capacity': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 05:42:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:42:23.294 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  9 05:42:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:42:23.294 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1be30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a74881280>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'disk.device.capacity': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 05:42:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:42:23.294 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f8a75e1b260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 05:42:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:42:23.295 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1bf20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a74881280>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'disk.device.capacity': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 05:42:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:42:23.295 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  9 05:42:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:42:23.295 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b7a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a74881280>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'disk.device.capacity': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'disk.device.read.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 05:42:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:42:23.295 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f8a75eb8170>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 05:42:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:42:23.295 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1bfb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a74881280>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'disk.device.capacity': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'disk.device.read.bytes': [], 'network.outgoing.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 05:42:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:42:23.296 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  9 05:42:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:42:23.296 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f8a75e1b290>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 05:42:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:42:23.296 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  9 05:42:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:42:23.297 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f8a75e1b2f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 05:42:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:42:23.297 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  9 05:42:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:42:23.297 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f8a75e1b350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 05:42:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:42:23.297 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  9 05:42:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:42:23.297 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f8a7710f530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 05:42:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:42:23.297 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  9 05:42:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:42:23.297 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f8a78ed1430>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 05:42:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:42:23.298 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  9 05:42:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:42:23.298 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f8a75e1b3b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 05:42:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:42:23.298 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  9 05:42:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:42:23.298 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f8a75e1b410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 05:42:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:42:23.298 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  9 05:42:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:42:23.298 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f8a75eb8410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 05:42:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:42:23.298 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  9 05:42:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:42:23.299 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f8a75e1be90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 05:42:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:42:23.299 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  9 05:42:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:42:23.299 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f8a75e1b470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 05:42:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:42:23.299 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  9 05:42:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:42:23.299 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f8a75e1b830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 05:42:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:42:23.299 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  9 05:42:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:42:23.300 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f8a75e1b4d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 05:42:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:42:23.300 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  9 05:42:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:42:23.300 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f8a75e1bad0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 05:42:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:42:23.300 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  9 05:42:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:42:23.300 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f8a75e1b530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 05:42:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:42:23.300 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  9 05:42:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:42:23.300 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f8a75e1bd40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 05:42:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:42:23.301 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  9 05:42:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:42:23.301 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f8a75e1bda0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 05:42:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:42:23.301 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  9 05:42:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:42:23.301 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f8a75e1be00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 05:42:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:42:23.301 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  9 05:42:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:42:23.301 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f8a75e1bef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 05:42:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:42:23.302 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  9 05:42:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:42:23.302 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f8a75e1b770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 05:42:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:42:23.302 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  9 05:42:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:42:23.302 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f8a75e1bf80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 05:42:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:42:23.302 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  9 05:42:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:42:23.302 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 05:42:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:42:23.303 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 05:42:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:42:23.303 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 05:42:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:42:23.303 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 05:42:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:42:23.303 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 05:42:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:42:23.303 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 05:42:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:42:23.303 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 05:42:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:42:23.303 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 05:42:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:42:23.303 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 05:42:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:42:23.303 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 05:42:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:42:23.304 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 05:42:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:42:23.304 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 05:42:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:42:23.304 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 05:42:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:42:23.304 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 05:42:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:42:23.304 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 05:42:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:42:23.304 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 05:42:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:42:23.304 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 05:42:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:42:23.304 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 05:42:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:42:23.304 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 05:42:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:42:23.304 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 05:42:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:42:23.305 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 05:42:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:42:23.305 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 05:42:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:42:23.305 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 05:42:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:42:23.305 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 05:42:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:42:23.305 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 05:42:23 np0005551604 ceilometer_agent_compute[200197]: 2025-12-09 10:42:23.305 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
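Annotation: the block above is one complete polling cycle on an idle compute node. Each pollster is registered for execution against a shared ThreadPoolExecutor, the local_instances discovery runs, and because the discovery cache is {'local_instances': []} every meter is skipped before being marked finished. A minimal Python sketch of that control flow follows; the names are illustrative and are not the ceilometer.polling.manager API.

    # Sketch of the cycle visible in the DEBUG lines above: register each
    # pollster, run its discovery, skip it when no resources are found,
    # then report it finished. Illustrative names only.
    from concurrent.futures import ThreadPoolExecutor

    def discover_local_instances():
        # On this node discovery returns no instances, so every
        # pollster is skipped this cycle.
        return []

    def run_pollster(name, discovery):
        resources = discovery()
        if not resources:
            print(f"Skip pollster {name}, no resources found this cycle")
        else:
            print(f"Polling {name} for {len(resources)} resources")
        print(f"Finished processing pollster [{name}].")

    pollsters = ["network.incoming.bytes", "disk.device.capacity", "cpu",
                 "memory.usage", "power.state"]
    with ThreadPoolExecutor() as executor:  # shared executor, as in the log
        for name in pollsters:
            executor.submit(run_pollster, name, discover_local_instances)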
Dec  9 05:42:23 np0005551604 python3.9[219956]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/kepler.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  9 05:42:23 np0005551604 podman[220052]: 2025-12-09 10:42:23.87301562 +0000 UTC m=+0.086820389 container health_status b432835229990b9e7cd237d75f8273b15e565fca524d4ea9a7c1f1bf3c773614 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, container_name=ceilometer_agent_compute)
Dec  9 05:42:24 np0005551604 python3.9[220088]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/telemetry-power-monitoring/kepler.json mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1765276942.86059-230-87382650480819/.source.json follow=False _original_basename=kepler.json.j2 checksum=89451093c8765edd3915016a9e87770fe489178d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
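Annotation: mode=420 in the copy task above is Ansible rendering the octal file mode 0644 in decimal, as this one-liner confirms:

    print(oct(420))   # '0o644' -> rw-r--r--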
Dec  9 05:42:24 np0005551604 podman[220096]: 2025-12-09 10:42:24.134515822 +0000 UTC m=+0.050849742 container health_status 8f562587c42532f877bd4ac5090cf2d81dd9415b6201e22f74972e6d6b9e9403 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Dec  9 05:42:24 np0005551604 python3.9[220263]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  9 05:42:25 np0005551604 python3.9[220339]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml _original_basename=ceilometer_prom_exporter.yaml.j2 recurse=False state=file path=/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:42:26 np0005551604 python3.9[220491]: ansible-ansible.builtin.file Invoked with group=ceilometer mode=0644 owner=ceilometer path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.crt recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:42:26 np0005551604 python3.9[220643]: ansible-ansible.builtin.file Invoked with group=ceilometer mode=0644 owner=ceilometer path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.key recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:42:27 np0005551604 python3.9[220795]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Dec  9 05:42:28 np0005551604 python3.9[220947]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ceilometer_agent_ipmi/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  9 05:42:29 np0005551604 python3.9[221070]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ceilometer_agent_ipmi/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765276947.892729-349-1279621786950/.source _original_basename=healthcheck follow=False checksum=ebb343c21fce35a02591a9351660cb7035a47d42 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec  9 05:42:29 np0005551604 podman[203687]: time="2025-12-09T10:42:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  9 05:42:29 np0005551604 podman[203687]: @ - - [09/Dec/2025:10:42:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 22542 "" "Go-http-client/1.1"
Dec  9 05:42:29 np0005551604 podman[203687]: @ - - [09/Dec/2025:10:42:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3420 "" "Go-http-client/1.1"
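Annotation: the two GETs above are libpod REST calls (container list, then stats) served by the podman API service to a Go HTTP client. A stdlib-only sketch of issuing the same request over the API socket follows; the socket path is an assumption (the default rootful location) and would differ for rootless podman.

    # Issue a libpod REST call over the podman API unix socket using
    # only the standard library. Socket path is an assumed default.
    import http.client
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        def __init__(self, path):
            super().__init__("localhost")
            self.unix_path = path

        def connect(self):
            sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            sock.connect(self.unix_path)
            self.sock = sock

    conn = UnixHTTPConnection("/run/podman/podman.sock")  # assumed path
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    resp = conn.getresponse()
    print(resp.status, len(resp.read()), "bytes")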
Dec  9 05:42:29 np0005551604 python3.9[221146]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ceilometer_agent_ipmi/healthcheck.future follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  9 05:42:30 np0005551604 podman[221241]: 2025-12-09 10:42:30.192603138 +0000 UTC m=+0.061281045 container health_status 5da5cd4e36e0bba48fb617392bc8983ed1dbced7e4599ef74bb3327a2d50468d (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.buildah.version=1.33.7, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, com.redhat.component=ubi9-minimal-container, architecture=x86_64, config_id=edpm, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, release=1755695350, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Dec  9 05:42:30 np0005551604 python3.9[221290]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ceilometer_agent_ipmi/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765276947.892729-349-1279621786950/.source.future _original_basename=healthcheck.future follow=False checksum=d500a98192f4ddd70b4dfdc059e2d81aed36a294 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec  9 05:42:31 np0005551604 python3.9[221442]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/kepler/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  9 05:42:31 np0005551604 openstack_network_exporter[205823]: ERROR   10:42:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  9 05:42:31 np0005551604 openstack_network_exporter[205823]: ERROR   10:42:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  9 05:42:31 np0005551604 openstack_network_exporter[205823]: ERROR   10:42:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  9 05:42:31 np0005551604 openstack_network_exporter[205823]: ERROR   10:42:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  9 05:42:31 np0005551604 openstack_network_exporter[205823]: ERROR   10:42:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
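Annotation: the exporter errors above all stem from missing control sockets: ovsdb-server and ovn-northd are not running on this compute node, and the dpif-netdev calls need an existing userspace datapath. A hedged sketch of the socket check that an ovs-appctl caller relies on; the rundir and daemon names are assumptions for illustration.

    # ovs-appctl needs a control socket like
    # /var/run/openvswitch/<daemon>.<pid>.ctl; fail early when absent.
    import glob
    import subprocess

    def appctl(daemon, *cmd, rundir="/var/run/openvswitch"):
        sockets = glob.glob(f"{rundir}/{daemon}.*.ctl")
        if not sockets:
            raise RuntimeError(f"no control socket files found for {daemon}")
        return subprocess.run(
            ["ovs-appctl", "-t", sockets[0], *cmd],
            capture_output=True, text=True, check=True
        ).stdout

    try:
        print(appctl("ovs-vswitchd", "dpif-netdev/pmd-perf-show"))
    except (RuntimeError, subprocess.CalledProcessError) as err:
        print("appctl failed:", err)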
Dec  9 05:42:31 np0005551604 podman[221537]: 2025-12-09 10:42:31.51472394 +0000 UTC m=+0.095035201 container health_status e0a077177b2f078df1f170a6e5c0e8e08d4365b999ec0c487047ed6ab628f3d6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec  9 05:42:31 np0005551604 python3.9[221586]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/kepler/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1765276950.603975-349-192118610981644/.source _original_basename=healthcheck follow=False checksum=57ed53cc150174efd98819129660d5b9ea9ea61a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Dec  9 05:42:32 np0005551604 python3.9[221744]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/telemetry-power-monitoring config_pattern=ceilometer_agent_ipmi.json debug=False
Dec  9 05:42:33 np0005551604 podman[221870]: 2025-12-09 10:42:33.587675131 +0000 UTC m=+0.050415710 container health_status d3a438131bb4ae6fd62d2e1493edbbbd51d1b8d6cbe1e9243f414a3aa421452b (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  9 05:42:33 np0005551604 python3.9[221922]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec  9 05:42:34 np0005551604 python3[222074]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/telemetry-power-monitoring config_id=edpm config_overrides={} config_patterns=ceilometer_agent_ipmi.json log_base_path=/var/log/containers/stdouts debug=False
Dec  9 05:42:35 np0005551604 podman[222110]: 2025-12-09 10:42:35.100998215 +0000 UTC m=+0.055504967 container create ceb1c84a2b093143b9383b7e11364d7e851348d724743a0cd9ce4fd0c7070c92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ceilometer_agent_ipmi, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=edpm, org.label-schema.license=GPLv2)
Dec  9 05:42:35 np0005551604 podman[222110]: 2025-12-09 10:42:35.076158342 +0000 UTC m=+0.030665124 image pull a92f7bca491c0b0ce2687db04282e6791be0613adb46862c56450b0e1308679d quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified
Dec  9 05:42:35 np0005551604 python3[222074]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ceilometer_agent_ipmi --conmon-pidfile /run/ceilometer_agent_ipmi.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env OS_ENDPOINT_TYPE=internal --healthcheck-command /openstack/healthcheck ipmi --label config_id=edpm --label container_name=ceilometer_agent_ipmi --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --security-opt label:type:ceilometer_polling_t --user ceilometer --volume /var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z --volume /var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z --volume /etc/hosts:/etc/hosts:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z --volume /var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z --volume /dev/log:/dev/log --volume /var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified kolla_start
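Annotation: the PODMAN-CONTAINER-DEBUG line shows edpm_container_manage expanding the config_data mapping into a podman create argument vector (environment entries become --env, volumes become --volume, the healthcheck test becomes --healthcheck-command, and so on). A sketch of that mapping for just the keys visible in this log; it is illustrative, not the module's actual implementation.

    # Map a config_data dict like the one logged above onto a
    # `podman create` argv. Covers only the keys seen in this log.
    def podman_create_args(name, cfg):
        args = ["podman", "create", "--name", name,
                "--conmon-pidfile", f"/run/{name}.pid"]
        for key, value in cfg.get("environment", {}).items():
            args += ["--env", f"{key}={value}"]
        if "healthcheck" in cfg:
            args += ["--healthcheck-command", cfg["healthcheck"]["test"]]
        if cfg.get("net") == "host":
            args += ["--network", "host"]
        if cfg.get("privileged"):
            args.append("--privileged=True")
        if "security_opt" in cfg:
            args += ["--security-opt", cfg["security_opt"]]
        if "user" in cfg:
            args += ["--user", cfg["user"]]
        for volume in cfg.get("volumes", []):
            args += ["--volume", volume]
        args.append(cfg["image"])
        if "command" in cfg:
            args.append(cfg["command"])
        return args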
Dec  9 05:42:35 np0005551604 python3.9[222301]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  9 05:42:37 np0005551604 python3.9[222455]: ansible-file Invoked with path=/etc/systemd/system/edpm_ceilometer_agent_ipmi.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:42:37 np0005551604 python3.9[222606]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765276957.1582315-427-230451822558060/source dest=/etc/systemd/system/edpm_ceilometer_agent_ipmi.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:42:38 np0005551604 python3.9[222682]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec  9 05:42:38 np0005551604 systemd[1]: Reloading.
Dec  9 05:42:38 np0005551604 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  9 05:42:38 np0005551604 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  9 05:42:39 np0005551604 python3.9[222793]: ansible-systemd Invoked with state=restarted name=edpm_ceilometer_agent_ipmi.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  9 05:42:39 np0005551604 systemd[1]: Reloading.
Dec  9 05:42:39 np0005551604 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  9 05:42:39 np0005551604 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  9 05:42:40 np0005551604 systemd[1]: Starting ceilometer_agent_ipmi container...
Dec  9 05:42:40 np0005551604 systemd[1]: Started libcrun container.
Dec  9 05:42:40 np0005551604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9890bb2656e1ba2d8e0afc2f648a786f5425824418ee3f17938feb8a5d0f6774/merged/etc/ceilometer/ceilometer_prom_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Dec  9 05:42:40 np0005551604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9890bb2656e1ba2d8e0afc2f648a786f5425824418ee3f17938feb8a5d0f6774/merged/etc/ceilometer/tls supports timestamps until 2038 (0x7fffffff)
Dec  9 05:42:40 np0005551604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9890bb2656e1ba2d8e0afc2f648a786f5425824418ee3f17938feb8a5d0f6774/merged/var/lib/openstack/config supports timestamps until 2038 (0x7fffffff)
Dec  9 05:42:40 np0005551604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9890bb2656e1ba2d8e0afc2f648a786f5425824418ee3f17938feb8a5d0f6774/merged/var/lib/kolla/config_files/config.json supports timestamps until 2038 (0x7fffffff)
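Annotation: the kernel's repeated 2038 warnings for the bind-mounted overlay paths mark the classic 32-bit time_t ceiling, 0x7fffffff seconds after the Unix epoch:

    from datetime import datetime, timezone
    print(datetime.fromtimestamp(0x7FFFFFFF, tz=timezone.utc))
    # 2038-01-19 03:14:07+00:00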
Dec  9 05:42:40 np0005551604 systemd[1]: Started /usr/bin/podman healthcheck run ceb1c84a2b093143b9383b7e11364d7e851348d724743a0cd9ce4fd0c7070c92.
Dec  9 05:42:40 np0005551604 podman[222833]: 2025-12-09 10:42:40.250740287 +0000 UTC m=+0.144380352 container init ceb1c84a2b093143b9383b7e11364d7e851348d724743a0cd9ce4fd0c7070c92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251202)
Dec  9 05:42:40 np0005551604 ceilometer_agent_ipmi[222848]: + sudo -E kolla_set_configs
Dec  9 05:42:40 np0005551604 podman[222833]: 2025-12-09 10:42:40.278969914 +0000 UTC m=+0.172609949 container start ceb1c84a2b093143b9383b7e11364d7e851348d724743a0cd9ce4fd0c7070c92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3)
Dec  9 05:42:40 np0005551604 podman[222833]: ceilometer_agent_ipmi
Dec  9 05:42:40 np0005551604 systemd[1]: Started ceilometer_agent_ipmi container.
Dec  9 05:42:40 np0005551604 ceilometer_agent_ipmi[222848]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec  9 05:42:40 np0005551604 ceilometer_agent_ipmi[222848]: INFO:__main__:Validating config file
Dec  9 05:42:40 np0005551604 ceilometer_agent_ipmi[222848]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec  9 05:42:40 np0005551604 ceilometer_agent_ipmi[222848]: INFO:__main__:Copying service configuration files
Dec  9 05:42:40 np0005551604 ceilometer_agent_ipmi[222848]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf
Dec  9 05:42:40 np0005551604 ceilometer_agent_ipmi[222848]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer.conf to /etc/ceilometer/ceilometer.conf
Dec  9 05:42:40 np0005551604 ceilometer_agent_ipmi[222848]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf
Dec  9 05:42:40 np0005551604 ceilometer_agent_ipmi[222848]: INFO:__main__:Deleting /etc/ceilometer/polling.yaml
Dec  9 05:42:40 np0005551604 ceilometer_agent_ipmi[222848]: INFO:__main__:Copying /var/lib/openstack/config/polling.yaml to /etc/ceilometer/polling.yaml
Dec  9 05:42:40 np0005551604 ceilometer_agent_ipmi[222848]: INFO:__main__:Setting permission for /etc/ceilometer/polling.yaml
Dec  9 05:42:40 np0005551604 ceilometer_agent_ipmi[222848]: INFO:__main__:Copying /var/lib/openstack/config/custom.conf to /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Dec  9 05:42:40 np0005551604 ceilometer_agent_ipmi[222848]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Dec  9 05:42:40 np0005551604 ceilometer_agent_ipmi[222848]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer-host-specific.conf to /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Dec  9 05:42:40 np0005551604 ceilometer_agent_ipmi[222848]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Dec  9 05:42:40 np0005551604 ceilometer_agent_ipmi[222848]: INFO:__main__:Writing out command to execute
Dec  9 05:42:40 np0005551604 ceilometer_agent_ipmi[222848]: ++ cat /run_command
Dec  9 05:42:40 np0005551604 ceilometer_agent_ipmi[222848]: + CMD='/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout'
Dec  9 05:42:40 np0005551604 ceilometer_agent_ipmi[222848]: + ARGS=
Dec  9 05:42:40 np0005551604 ceilometer_agent_ipmi[222848]: + sudo kolla_copy_cacerts
Dec  9 05:42:40 np0005551604 podman[222855]: 2025-12-09 10:42:40.34697929 +0000 UTC m=+0.057018429 container health_status ceb1c84a2b093143b9383b7e11364d7e851348d724743a0cd9ce4fd0c7070c92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=starting, health_failing_streak=1, health_log=, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Dec  9 05:42:40 np0005551604 systemd[1]: ceb1c84a2b093143b9383b7e11364d7e851348d724743a0cd9ce4fd0c7070c92-d49b0d9dbbb7496.service: Main process exited, code=exited, status=1/FAILURE
Dec  9 05:42:40 np0005551604 systemd[1]: ceb1c84a2b093143b9383b7e11364d7e851348d724743a0cd9ce4fd0c7070c92-d49b0d9dbbb7496.service: Failed with result 'exit-code'.
Dec  9 05:42:40 np0005551604 ceilometer_agent_ipmi[222848]: + [[ ! -n '' ]]
Dec  9 05:42:40 np0005551604 ceilometer_agent_ipmi[222848]: + . kolla_extend_start
Dec  9 05:42:40 np0005551604 ceilometer_agent_ipmi[222848]: Running command: '/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout'
Dec  9 05:42:40 np0005551604 ceilometer_agent_ipmi[222848]: + echo 'Running command: '\''/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout'\'''
Dec  9 05:42:40 np0005551604 ceilometer_agent_ipmi[222848]: + umask 0022
Dec  9 05:42:40 np0005551604 ceilometer_agent_ipmi[222848]: + exec /usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout
Dec  9 05:42:41 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.271 2 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_manager_options /usr/lib/python3.9/site-packages/cotyledon/oslo_config_glue.py:40
Dec  9 05:42:41 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.271 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Dec  9 05:42:41 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.271 2 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Dec  9 05:42:41 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.271 2 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'ipmi', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Dec  9 05:42:41 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.271 2 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Dec  9 05:42:41 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.271 2 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Dec  9 05:42:41 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.271 2 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:42:41 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.271 2 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:42:41 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.272 2 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:42:41 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.272 2 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:42:41 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.272 2 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:42:41 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.272 2 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:42:41 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.272 2 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:42:41 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.272 2 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:42:41 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.272 2 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:42:41 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.272 2 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:42:41 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.272 2 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:42:41 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.272 2 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:42:41 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.273 2 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:42:41 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.273 2 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:42:41 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.273 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:42:41 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.273 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:42:41 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.273 2 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:42:41 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.273 2 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:42:41 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.273 2 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:42:41 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.273 2 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:42:41 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.273 2 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:42:41 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.273 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:42:41 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.273 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:42:41 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.273 2 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:42:41 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.274 2 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:42:41 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.274 2 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:42:41 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.274 2 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:42:41 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.274 2 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:42:41 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.274 2 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:42:41 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.274 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:42:41 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.274 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:42:41 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.274 2 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:42:41 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.274 2 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:42:41 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.274 2 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:42:41 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.274 2 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['ipmi'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:42:41 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.275 2 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:42:41 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.275 2 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:42:41 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.275 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:42:41 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.275 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:42:41 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.275 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:42:41 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.275 2 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:42:41 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.275 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:42:41 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.275 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:42:41 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.275 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:42:41 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.276 2 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:42:41 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.276 2 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:42:41 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.276 2 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:42:41 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.276 2 DEBUG cotyledon.oslo_config_glue [-] tenant_name_discovery          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:42:41 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.276 2 DEBUG cotyledon.oslo_config_glue [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:42:41 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.276 2 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:42:41 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.276 2 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:42:41 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.276 2 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:42:41 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.276 2 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:42:41 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.276 2 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:42:41 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.276 2 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:41 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.277 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:41 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.277 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:41 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.277 2 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:41 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.277 2 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:41 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.277 2 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:41 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.277 2 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:41 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.277 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.node_manager_init_retry   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:41 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.277 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:41 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.277 2 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.9/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:41 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.277 2 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_on_failure     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:41 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.278 2 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_path           = mon_pub_failures.txt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:41 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.278 2 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:41 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.278 2 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:41 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.278 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_count            = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:41 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.278 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_max_retries      = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:41 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.278 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_mode             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:41 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.278 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_polling_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:41 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.278 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_timeout          = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:41 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.278 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:41 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.278 2 DEBUG cotyledon.oslo_config_glue [-] monasca.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:41 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.278 2 DEBUG cotyledon.oslo_config_glue [-] monasca.client_max_retries     = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:41 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.278 2 DEBUG cotyledon.oslo_config_glue [-] monasca.client_retry_interval  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:41 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.279 2 DEBUG cotyledon.oslo_config_glue [-] monasca.clientapi_version      = 2_0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:41 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.279 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cloud_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:41 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.279 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cluster                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:41 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.279 2 DEBUG cotyledon.oslo_config_glue [-] monasca.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:41 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.279 2 DEBUG cotyledon.oslo_config_glue [-] monasca.control_plane          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:41 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.279 2 DEBUG cotyledon.oslo_config_glue [-] monasca.enable_api_pagination  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:41 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.279 2 DEBUG cotyledon.oslo_config_glue [-] monasca.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:41 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.279 2 DEBUG cotyledon.oslo_config_glue [-] monasca.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:41 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.279 2 DEBUG cotyledon.oslo_config_glue [-] monasca.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:41 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.279 2 DEBUG cotyledon.oslo_config_glue [-] monasca.monasca_mappings       = /etc/ceilometer/monasca_field_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:41 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.280 2 DEBUG cotyledon.oslo_config_glue [-] monasca.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:41 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.280 2 DEBUG cotyledon.oslo_config_glue [-] monasca.retry_on_failure       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:41 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.280 2 DEBUG cotyledon.oslo_config_glue [-] monasca.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:41 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.280 2 DEBUG cotyledon.oslo_config_glue [-] monasca.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:41 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.280 2 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:41 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.280 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:41 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.280 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:41 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.280 2 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:41 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.280 2 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'sahara', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:41 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.281 2 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:41 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.281 2 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:41 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.281 2 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:41 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.281 2 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:41 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.281 2 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:41 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.281 2 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:41 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.281 2 DEBUG cotyledon.oslo_config_glue [-] polling.tenant_name_discovery  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:41 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.281 2 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:41 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.281 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:41 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.281 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:41 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.281 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:41 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.282 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:41 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.282 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:41 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.282 2 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:41 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.282 2 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:41 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.282 2 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:41 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.282 2 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:41 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.282 2 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:41 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.282 2 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:41 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.282 2 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:41 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.282 2 DEBUG cotyledon.oslo_config_glue [-] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:41 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.283 2 DEBUG cotyledon.oslo_config_glue [-] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:41 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.283 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_ip                 = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:41 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.283 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:41 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.283 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:41 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.283 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_username           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:41 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.283 2 DEBUG cotyledon.oslo_config_glue [-] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:41 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.283 2 DEBUG cotyledon.oslo_config_glue [-] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:41 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.283 2 DEBUG cotyledon.oslo_config_glue [-] vmware.wsdl_location           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:41 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.283 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:41 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.283 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:41 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.283 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:41 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.284 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:41 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.284 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:41 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.284 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:41 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.284 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:41 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.284 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:41 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.284 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:41 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.284 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:41 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.284 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:41 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.284 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:41 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.284 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:41 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.284 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:41 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.285 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:41 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.285 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:41 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.285 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:41 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.285 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:41 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.285 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:41 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.285 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:41 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.285 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:41 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.285 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:41 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.285 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:41 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.285 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:41 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.285 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:41 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.286 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:41 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.286 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:41 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.286 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:41 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.286 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:41 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.286 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:41 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.286 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:41 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.286 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:41 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.286 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:41 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.286 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Dec  9 05:42:41 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.306 12 INFO ceilometer.polling.manager [-] Looking for dynamic pollsters configurations at [['/etc/ceilometer/pollsters.d']].
Dec  9 05:42:41 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.308 12 INFO ceilometer.polling.manager [-] No dynamic pollsters found in folder [/etc/ceilometer/pollsters.d].
Dec  9 05:42:41 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.309 12 INFO ceilometer.polling.manager [-] No dynamic pollsters file found in dirs [['/etc/ceilometer/pollsters.d']].
Dec  9 05:42:41 np0005551604 python3.9[223032]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/telemetry-power-monitoring config_pattern=kepler.json debug=False
Dec  9 05:42:41 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.429 12 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'ceilometer-rootwrap', '/etc/ceilometer/rootwrap.conf', 'privsep-helper', '--privsep_context', 'ceilometer.privsep.sys_admin_pctxt', '--privsep_sock_path', '/tmp/tmp1g99b78j/privsep.sock']
Dec  9 05:42:41 np0005551604 podman[223140]: 2025-12-09 10:42:41.918448293 +0000 UTC m=+0.075568373 container health_status 0391d8911d61abd7376f1f93f329cadfe8d3add845c9e6f46fc2c3dfbcc4f02a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Dec  9 05:42:42 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.110 12 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Dec  9 05:42:42 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.111 12 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmp1g99b78j/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Dec  9 05:42:42 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:41.998 19 INFO oslo.privsep.daemon [-] privsep daemon starting
Dec  9 05:42:42 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.006 19 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Dec  9 05:42:42 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.010 19 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/none
Dec  9 05:42:42 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.010 19 INFO oslo.privsep.daemon [-] privsep daemon running as pid 19
Dec  9 05:42:42 np0005551604 python3.9[223213]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/config-data
Dec  9 05:42:42 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.240 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.current: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec  9 05:42:42 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.241 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.fan: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec  9 05:42:42 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.242 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.airflow: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec  9 05:42:42 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.242 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.cpu_util: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec  9 05:42:42 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.242 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.cups: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec  9 05:42:42 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.242 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.io_util: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec  9 05:42:42 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.242 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.mem_util: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec  9 05:42:42 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.242 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.outlet_temperature: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec  9 05:42:42 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.242 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.power: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec  9 05:42:42 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.243 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.temperature: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec  9 05:42:42 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.243 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.temperature: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec  9 05:42:42 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.243 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.voltage: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec  9 05:42:42 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.243 12 WARNING ceilometer.polling.manager [-] No valid pollsters can be loaded from ['ipmi'] namespaces
Dec  9 05:42:42 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.246 12 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_options /usr/lib/python3.9/site-packages/cotyledon/oslo_config_glue.py:48
Dec  9 05:42:42 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.246 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Dec  9 05:42:42 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.246 12 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Dec  9 05:42:42 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.246 12 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'ipmi', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Dec  9 05:42:42 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.246 12 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Dec  9 05:42:42 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.246 12 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Dec  9 05:42:42 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.246 12 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:42:42 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.247 12 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:42:42 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.247 12 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:42:42 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.247 12 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:42:42 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.247 12 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:42:42 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.247 12 DEBUG cotyledon.oslo_config_glue [-] control_exchange               = ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:42:42 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.247 12 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:42:42 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.247 12 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:42:42 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.248 12 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:42:42 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.248 12 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:42:42 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.248 12 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:42:42 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.248 12 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:42:42 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.248 12 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:42:42 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.248 12 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:42:42 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.248 12 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:42:42 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.248 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:42:42 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.249 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:42:42 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.249 12 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:42:42 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.249 12 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:42:42 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.249 12 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:42:42 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.249 12 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:42:42 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.249 12 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:42:42 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.249 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:42:42 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.250 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:42:42 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.250 12 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:42:42 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.250 12 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:42:42 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.251 12 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:42:42 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.251 12 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:42:42 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.251 12 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:42:42 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.251 12 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:42:42 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.251 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:42:42 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.251 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:42:42 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.251 12 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:42:42 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.251 12 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:42:42 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.251 12 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:42:42 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.252 12 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['ipmi'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:42:42 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.252 12 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:42:42 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.252 12 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:42:42 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.252 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:42:42 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.252 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:42:42 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.252 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:42:42 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.252 12 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:42:42 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.252 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:42:42 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.252 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:42:42 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.252 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:42:42 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.253 12 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:42:42 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.253 12 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:42:42 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.253 12 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:42:42 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.253 12 DEBUG cotyledon.oslo_config_glue [-] tenant_name_discovery          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:42:42 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.253 12 DEBUG cotyledon.oslo_config_glue [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:42:42 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.253 12 DEBUG cotyledon.oslo_config_glue [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:42:42 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.253 12 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:42:42 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.253 12 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:42:42 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.253 12 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:42:42 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.253 12 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:42:42 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.253 12 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:42:42 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.254 12 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:42 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.254 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:42 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.254 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:42 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.254 12 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:42 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.254 12 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:42 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.254 12 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:42 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.254 12 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:42 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.254 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.node_manager_init_retry   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:42 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.254 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:42 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.254 12 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.9/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:42 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.255 12 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_on_failure     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:42 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.255 12 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_path           = mon_pub_failures.txt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:42 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.255 12 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:42 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.255 12 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:42 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.255 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_count            = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:42 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.255 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_max_retries      = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:42 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.255 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_mode             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:42 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.255 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_polling_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:42 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.255 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_timeout          = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:42 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.255 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:42 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.255 12 DEBUG cotyledon.oslo_config_glue [-] monasca.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:42 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.256 12 DEBUG cotyledon.oslo_config_glue [-] monasca.client_max_retries     = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:42 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.256 12 DEBUG cotyledon.oslo_config_glue [-] monasca.client_retry_interval  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:42 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.256 12 DEBUG cotyledon.oslo_config_glue [-] monasca.clientapi_version      = 2_0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:42 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.256 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cloud_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:42 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.256 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cluster                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:42 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.256 12 DEBUG cotyledon.oslo_config_glue [-] monasca.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:42 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.256 12 DEBUG cotyledon.oslo_config_glue [-] monasca.control_plane          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:42 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.256 12 DEBUG cotyledon.oslo_config_glue [-] monasca.enable_api_pagination  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:42 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.256 12 DEBUG cotyledon.oslo_config_glue [-] monasca.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:42 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.256 12 DEBUG cotyledon.oslo_config_glue [-] monasca.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:42 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.257 12 DEBUG cotyledon.oslo_config_glue [-] monasca.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:42 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.257 12 DEBUG cotyledon.oslo_config_glue [-] monasca.monasca_mappings       = /etc/ceilometer/monasca_field_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:42 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.257 12 DEBUG cotyledon.oslo_config_glue [-] monasca.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:42 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.257 12 DEBUG cotyledon.oslo_config_glue [-] monasca.retry_on_failure       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:42 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.257 12 DEBUG cotyledon.oslo_config_glue [-] monasca.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:42 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.257 12 DEBUG cotyledon.oslo_config_glue [-] monasca.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:42 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.257 12 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:42 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.257 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:42 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.257 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:42 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.257 12 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:42 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.258 12 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'sahara', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:42 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.258 12 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:42 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.258 12 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:42 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.258 12 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:42 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.258 12 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:42 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.258 12 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:42 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.258 12 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:42 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.258 12 DEBUG cotyledon.oslo_config_glue [-] polling.tenant_name_discovery  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:42 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.259 12 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:42 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.259 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:42 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.259 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:42 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.259 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:42 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.259 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:42 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.259 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:42 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.259 12 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:42 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.259 12 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:42 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.259 12 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:42 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.260 12 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:42 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.260 12 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:42 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.260 12 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:42 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.260 12 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:42 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.260 12 DEBUG cotyledon.oslo_config_glue [-] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:42 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.260 12 DEBUG cotyledon.oslo_config_glue [-] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:42 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.260 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_ip                 = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:42 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.260 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:42 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.260 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:42 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.260 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_username           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:42 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.261 12 DEBUG cotyledon.oslo_config_glue [-] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:42 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.261 12 DEBUG cotyledon.oslo_config_glue [-] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:42 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.261 12 DEBUG cotyledon.oslo_config_glue [-] vmware.wsdl_location           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:42 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.261 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:42 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.261 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:42 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.261 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:42 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.261 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:42 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.261 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:42 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.261 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:42 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.262 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:42 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.262 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:42 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.262 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:42 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.262 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:42 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.262 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:42 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.262 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:42 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.262 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:42 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.262 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:42 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.262 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:42 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.262 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:42 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.262 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:42 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.262 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:42 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.262 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:42 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.263 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:42 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.263 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:42 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.263 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:42 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.263 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:42 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.263 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:42 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.263 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:42 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.263 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:42 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.263 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:42 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.263 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:42 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.263 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:42 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.263 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:42 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.264 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:42 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.264 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:42 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.264 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:42 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.264 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:42 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.264 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:42 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.264 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:42 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.264 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:42 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.264 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:42 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.264 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:42 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.264 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:42 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.264 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:42 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.264 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:42 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.265 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:42 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.265 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:42 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.265 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:42 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.265 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:42 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.265 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:42 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.265 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:42 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.265 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:42 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.265 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:42 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.265 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:42 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.265 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:42 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.265 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:42 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.265 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:42 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.266 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:42 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.266 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:42 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.266 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:42 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.266 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:42 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.266 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:42 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.266 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:42 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.266 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:42 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.266 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:42 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.266 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:42 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.267 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:42 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.267 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:42 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.267 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:42 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.267 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:42 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.267 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:42 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.267 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Dec  9 05:42:42 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.267 12 DEBUG cotyledon._service [-] Run service AgentManager(0) [12] wait_forever /usr/lib/python3.9/site-packages/cotyledon/_service.py:241
Dec  9 05:42:42 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:42.269 12 DEBUG ceilometer.agent [-] Config file: {'sources': [{'name': 'pollsters', 'interval': 120, 'meters': ['hardware.*']}]} load_config /usr/lib/python3.9/site-packages/ceilometer/agent.py:64
Dec  9 05:42:43 np0005551604 python3[223370]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/telemetry-power-monitoring config_id=edpm config_overrides={} config_patterns=kepler.json log_base_path=/var/log/containers/stdouts debug=False
Dec  9 05:42:43 np0005551604 podman[223403]: 2025-12-09 10:42:43.565317939 +0000 UTC m=+0.049969057 container create 8ad198c17f1da12dc50d5e17562d0139fb2a2f84db056ee9551dbf4f34c4cb9d (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, version=9.4, maintainer=Red Hat, Inc., managed_by=edpm_ansible, io.buildah.version=1.29.0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, vcs-type=git, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=kepler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, name=ubi9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, build-date=2024-09-18T21:23:30, release=1214.1726694543, distribution-scope=public, com.redhat.component=ubi9-container, io.openshift.expose-services=, io.openshift.tags=base rhel9, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9)
Dec  9 05:42:43 np0005551604 podman[223403]: 2025-12-09 10:42:43.537515775 +0000 UTC m=+0.022166913 image pull ed61e3ea3188391c18595d8ceada2a5a01f0ece915c62fde355798735b5208d7 quay.io/sustainable_computing_io/kepler:release-0.7.12
Dec  9 05:42:43 np0005551604 python3[223370]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name kepler --conmon-pidfile /run/kepler.pid --env ENABLE_GPU=true --env EXPOSE_CONTAINER_METRICS=true --env ENABLE_PROCESS_METRICS=true --env EXPOSE_VM_METRICS=true --env EXPOSE_ESTIMATED_IDLE_POWER_METRICS=false --env LIBVIRT_METADATA_URI=http://openstack.org/xmlns/libvirt/nova/1.1 --healthcheck-command /openstack/healthcheck kepler --label config_id=edpm --label container_name=kepler --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --publish 8888:8888 --volume /lib/modules:/lib/modules:ro --volume /run/libvirt:/run/libvirt:shared,ro --volume /sys:/sys --volume /proc:/proc --volume /var/lib/openstack/healthchecks/kepler:/openstack:ro,z quay.io/sustainable_computing_io/kepler:release-0.7.12 -v=2
Dec  9 05:42:44 np0005551604 python3.9[223592]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  9 05:42:45 np0005551604 python3.9[223746]: ansible-file Invoked with path=/etc/systemd/system/edpm_kepler.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:42:45 np0005551604 podman[223869]: 2025-12-09 10:42:45.952436565 +0000 UTC m=+0.105314879 container health_status 8508a94dacd5acdb5dbf860f4282331529be5c86ebd3e90b10e1dde8bc5013e9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  9 05:42:46 np0005551604 python3.9[223912]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1765276965.1988633-489-12468935243587/source dest=/etc/systemd/system/edpm_kepler.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:42:46 np0005551604 python3.9[223995]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Dec  9 05:42:46 np0005551604 systemd[1]: Reloading.
Dec  9 05:42:46 np0005551604 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  9 05:42:46 np0005551604 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  9 05:42:47 np0005551604 python3.9[224106]: ansible-systemd Invoked with state=restarted name=edpm_kepler.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Dec  9 05:42:47 np0005551604 systemd[1]: Reloading.
Dec  9 05:42:47 np0005551604 systemd-sysv-generator: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Dec  9 05:42:47 np0005551604 systemd-rc-local-generator: /etc/rc.d/rc.local is not marked executable, skipping.
Dec  9 05:42:48 np0005551604 systemd[1]: Starting kepler container...
Dec  9 05:42:48 np0005551604 systemd[1]: Started libcrun container.
Dec  9 05:42:48 np0005551604 systemd[1]: Started /usr/bin/podman healthcheck run 8ad198c17f1da12dc50d5e17562d0139fb2a2f84db056ee9551dbf4f34c4cb9d.
Dec  9 05:42:48 np0005551604 podman[224145]: 2025-12-09 10:42:48.401895412 +0000 UTC m=+0.132637469 container init 8ad198c17f1da12dc50d5e17562d0139fb2a2f84db056ee9551dbf4f34c4cb9d (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, architecture=x86_64, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.expose-services=, com.redhat.component=ubi9-container, managed_by=edpm_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9, version=9.4, maintainer=Red Hat, Inc., io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, release=1214.1726694543, release-0.7.12=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.tags=base rhel9, name=ubi9, config_id=edpm)
Dec  9 05:42:48 np0005551604 kepler[224161]: WARNING: failed to read int from file: open /sys/devices/system/cpu/cpu0/online: no such file or directory
Dec  9 05:42:48 np0005551604 podman[224145]: 2025-12-09 10:42:48.424236298 +0000 UTC m=+0.154978305 container start 8ad198c17f1da12dc50d5e17562d0139fb2a2f84db056ee9551dbf4f34c4cb9d (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, name=ubi9, vendor=Red Hat, Inc., container_name=kepler, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release-0.7.12=, maintainer=Red Hat, Inc., io.buildah.version=1.29.0, build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_id=edpm, summary=Provides the latest release of Red Hat Universal Base Image 9., io.openshift.expose-services=, release=1214.1726694543, architecture=x86_64, managed_by=edpm_ansible, vcs-type=git, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4)
Dec  9 05:42:48 np0005551604 kepler[224161]: I1209 10:42:48.426637       1 exporter.go:103] Kepler running on version: v0.7.12-dirty
Dec  9 05:42:48 np0005551604 kepler[224161]: I1209 10:42:48.426789       1 config.go:293] using gCgroup ID in the BPF program: true
Dec  9 05:42:48 np0005551604 kepler[224161]: I1209 10:42:48.426820       1 config.go:295] kernel version: 5.14
Dec  9 05:42:48 np0005551604 kepler[224161]: I1209 10:42:48.427335       1 power.go:78] Unable to obtain power, use estimate method
Dec  9 05:42:48 np0005551604 podman[224145]: kepler
Dec  9 05:42:48 np0005551604 kepler[224161]: I1209 10:42:48.427344       1 redfish.go:169] failed to get redfish credential file path
Dec  9 05:42:48 np0005551604 kepler[224161]: I1209 10:42:48.427754       1 acpi.go:71] Could not find any ACPI power meter path. Is it a VM?
Dec  9 05:42:48 np0005551604 kepler[224161]: I1209 10:42:48.427824       1 power.go:79] using none to obtain power
Dec  9 05:42:48 np0005551604 kepler[224161]: E1209 10:42:48.427838       1 accelerator.go:154] [DUMMY] doesn't contain GPU
Dec  9 05:42:48 np0005551604 kepler[224161]: E1209 10:42:48.427858       1 exporter.go:154] failed to init GPU accelerators: no devices found
Dec  9 05:42:48 np0005551604 kepler[224161]: WARNING: failed to read int from file: open /sys/devices/system/cpu/cpu0/online: no such file or directory
Dec  9 05:42:48 np0005551604 kepler[224161]: I1209 10:42:48.429291       1 exporter.go:84] Number of CPUs: 8
Dec  9 05:42:48 np0005551604 systemd[1]: Started kepler container.
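
The Kepler lines above show its power-source fallback on a KVM guest: no platform power reading is available (power.go:78 "Unable to obtain power, use estimate method"), there is no Redfish credential file, no ACPI power meter (acpi.go:71 even asks "Is it a VM?"), and the DUMMY accelerator finds no GPU despite ENABLE_GPU=true, so Kepler settles on the "none" source and model-based estimation. A minimal probe of the same sysfs locations (the paths are standard Linux sysfs; the hwmon name "power_meter" for the acpi_power_meter driver is an assumption here, and the probe order mirrors the log rather than Kepler's exact code):

    import os

    def probe_power_sources():
        """Rough sketch of the discovery the log shows: check the standard
        sysfs locations for hardware power sources before falling back."""
        sources = {}
        # RAPL exposes energy counters under powercap on bare metal;
        # a KVM guest normally has no such directory.
        sources["rapl"] = os.path.isdir("/sys/class/powercap/intel-rapl")
        # ACPI power meters appear as hwmon devices (assumed name "power_meter").
        hwmon = "/sys/class/hwmon"
        names = []
        if os.path.isdir(hwmon):
            for dev in os.listdir(hwmon):
                name_file = os.path.join(hwmon, dev, "name")
                if os.path.isfile(name_file):
                    with open(name_file) as f:
                        names.append(f.read().strip())
        sources["acpi_power_meter"] = "power_meter" in names
        return sources

    print(probe_power_sources())  # on this VM: everything False -> estimate mode
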
Dec  9 05:42:48 np0005551604 podman[224171]: 2025-12-09 10:42:48.533552494 +0000 UTC m=+0.092234272 container health_status 8ad198c17f1da12dc50d5e17562d0139fb2a2f84db056ee9551dbf4f34c4cb9d (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=starting, health_failing_streak=1, health_log=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.openshift.tags=base rhel9, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, distribution-scope=public, release=1214.1726694543, build-date=2024-09-18T21:23:30, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, release-0.7.12=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, vcs-type=git, version=9.4, managed_by=edpm_ansible, com.redhat.component=ubi9-container, config_id=edpm, name=ubi9, maintainer=Red Hat, Inc., architecture=x86_64, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543)
Dec  9 05:42:48 np0005551604 systemd[1]: 8ad198c17f1da12dc50d5e17562d0139fb2a2f84db056ee9551dbf4f34c4cb9d-7ead5e0f1bade9d9.service: Main process exited, code=exited, status=1/FAILURE
Dec  9 05:42:48 np0005551604 systemd[1]: 8ad198c17f1da12dc50d5e17562d0139fb2a2f84db056ee9551dbf4f34c4cb9d-7ead5e0f1bade9d9.service: Failed with result 'exit-code'.
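
The failing unit 8ad198c1...-7ead5e0f1bade9d9.service is the transient systemd service podman spawns for a single healthcheck run; status=1/FAILURE here simply means the first /openstack/healthcheck probe failed while the exporter was still starting (the health_status line above reports health_status=starting, health_failing_streak=1). A hedged sketch for watching the status converge via podman inspect (the JSON field name varies by podman version; which of State.Health or State.Healthcheck applies is an assumption):

    import json
    import subprocess
    import time

    def wait_healthy(name: str, timeout: float = 60.0) -> str:
        """Poll `podman inspect` until the named container reports healthy."""
        status = "unknown"
        deadline = time.monotonic() + timeout
        while time.monotonic() < deadline:
            raw = subprocess.run(
                ["podman", "inspect", name],
                capture_output=True, text=True, check=True,
            ).stdout
            state = json.loads(raw)[0]["State"]
            # Field name differs across podman releases (assumption: one of the two).
            health = state.get("Health") or state.get("Healthcheck") or {}
            status = health.get("Status", "unknown")
            if status == "healthy":
                break
            time.sleep(2)
        return status

    print(wait_healthy("kepler"))
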
Dec  9 05:42:49 np0005551604 kepler[224161]: I1209 10:42:49.008517       1 watcher.go:83] Using in cluster k8s config
Dec  9 05:42:49 np0005551604 kepler[224161]: I1209 10:42:49.008592       1 watcher.go:90] failed to get config: unable to load in-cluster configuration, KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT must be defined
Dec  9 05:42:49 np0005551604 kepler[224161]: E1209 10:42:49.008720       1 manager.go:59] could not run the watcher k8s APIserver watcher was not enabled
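
The watcher failure above is expected on a standalone EDPM node: the in-cluster Kubernetes config only exists when the process runs in a pod with the service-account environment injected, and KUBERNETES_SERVICE_HOST/KUBERNETES_SERVICE_PORT are unset here, so Kepler disables the k8s API watcher and carries on without pod metadata. The same check with the official Python client, purely illustrative (assumes the `kubernetes` package is installed):

    import os
    from kubernetes import config
    from kubernetes.config import ConfigException

    def k8s_watcher_enabled() -> bool:
        """In-cluster config needs the service env vars the log says are missing."""
        if not (os.environ.get("KUBERNETES_SERVICE_HOST")
                and os.environ.get("KUBERNETES_SERVICE_PORT")):
            return False
        try:
            config.load_incluster_config()
            return True
        except ConfigException:
            return False
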
Dec  9 05:42:49 np0005551604 kepler[224161]: I1209 10:42:49.016147       1 process_energy.go:129] Using the Ratio Power Model to estimate PROCESS_TOTAL Power
Dec  9 05:42:49 np0005551604 kepler[224161]: I1209 10:42:49.016212       1 process_energy.go:130] Feature names: [bpf_cpu_time_ms]
Dec  9 05:42:49 np0005551604 kepler[224161]: I1209 10:42:49.023471       1 process_energy.go:129] Using the Ratio Power Model to estimate PROCESS_COMPONENTS Power
Dec  9 05:42:49 np0005551604 kepler[224161]: I1209 10:42:49.023503       1 process_energy.go:130] Feature names: [bpf_cpu_time_ms bpf_cpu_time_ms bpf_cpu_time_ms   gpu_compute_util]
Dec  9 05:42:49 np0005551604 kepler[224161]: I1209 10:42:49.035160       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Dec  9 05:42:49 np0005551604 kepler[224161]: I1209 10:42:49.035221       1 model.go:125] Requesting for Machine Spec: &{authenticamd amd_epyc_rome 8 8 7 2800 1}
Dec  9 05:42:49 np0005551604 kepler[224161]: I1209 10:42:49.035250       1 node_platform_energy.go:53] Using the Regressor/AbsPower Power Model to estimate Node Platform Power
Dec  9 05:42:49 np0005551604 kepler[224161]: I1209 10:42:49.048271       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Dec  9 05:42:49 np0005551604 kepler[224161]: I1209 10:42:49.048323       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Dec  9 05:42:49 np0005551604 kepler[224161]: I1209 10:42:49.048332       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Dec  9 05:42:49 np0005551604 kepler[224161]: I1209 10:42:49.048341       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Dec  9 05:42:49 np0005551604 kepler[224161]: I1209 10:42:49.048350       1 model.go:125] Requesting for Machine Spec: &{authenticamd amd_epyc_rome 8 8 7 2800 1}
Dec  9 05:42:49 np0005551604 kepler[224161]: I1209 10:42:49.048367       1 node_component_energy.go:57] Using the Regressor/AbsPower Power Model to estimate Node Component Power
Dec  9 05:42:49 np0005551604 kepler[224161]: I1209 10:42:49.049176       1 prometheus_collector.go:90] Registered Process Prometheus metrics
Dec  9 05:42:49 np0005551604 kepler[224161]: I1209 10:42:49.049253       1 prometheus_collector.go:95] Registered Container Prometheus metrics
Dec  9 05:42:49 np0005551604 kepler[224161]: I1209 10:42:49.049332       1 prometheus_collector.go:100] Registered VM Prometheus metrics
Dec  9 05:42:49 np0005551604 kepler[224161]: I1209 10:42:49.049430       1 prometheus_collector.go:104] Registered Node Prometheus metrics
Dec  9 05:42:49 np0005551604 kepler[224161]: I1209 10:42:49.049675       1 exporter.go:194] starting to listen on 0.0.0.0:8888
Dec  9 05:42:49 np0005551604 kepler[224161]: I1209 10:42:49.050953       1 exporter.go:208] Started Kepler in 624.492393ms
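
With no hardware power source, the lines above show Kepler falling back to the Ratio Power Model: node power comes from a trained regressor (the repeated SGDRegressorTrainer predictors), and each process's share is attributed in proportion to its share of the BPF CPU-time feature (bpf_cpu_time_ms). A toy version of that attribution step, with invented numbers for illustration:

    def ratio_attribute(node_power_w: float, cpu_time_ms: dict) -> dict:
        """Split an estimated node power across processes in proportion to
        their BPF-measured CPU time, as the Ratio model does."""
        total = sum(cpu_time_ms.values())
        if total == 0:
            return {pid: 0.0 for pid in cpu_time_ms}
        return {pid: node_power_w * t / total for pid, t in cpu_time_ms.items()}

    # e.g. an estimated 40 W node shared by three processes:
    print(ratio_attribute(40.0, {"nova-compute": 600.0, "kepler": 150.0, "idle": 250.0}))
    # -> {'nova-compute': 24.0, 'kepler': 6.0, 'idle': 10.0}
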
Dec  9 05:42:49 np0005551604 python3.9[224356]: ansible-ansible.builtin.systemd Invoked with name=edpm_ceilometer_agent_ipmi.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
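
The ansible.builtin.systemd invocation above restarts the container through its systemd wrapper unit, which is why the following lines show a clean SIGTERM stop and a fresh start. It is roughly equivalent to running systemctl directly (illustrative only; the ansible module adds daemon-reload and state handling around this):

    import subprocess

    # Approximate effect of the ansible task logged above:
    subprocess.run(
        ["systemctl", "restart", "edpm_ceilometer_agent_ipmi.service"],
        check=True,
    )
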
Dec  9 05:42:49 np0005551604 systemd[1]: Stopping ceilometer_agent_ipmi container...
Dec  9 05:42:49 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:49.496 2 INFO cotyledon._service_manager [-] Caught SIGTERM signal, graceful exiting of master process
Dec  9 05:42:49 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:49.598 2 DEBUG cotyledon._service_manager [-] Killing services with signal SIGTERM _shutdown /usr/lib/python3.9/site-packages/cotyledon/_service_manager.py:304
Dec  9 05:42:49 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:49.599 2 DEBUG cotyledon._service_manager [-] Waiting services to terminate _shutdown /usr/lib/python3.9/site-packages/cotyledon/_service_manager.py:308
Dec  9 05:42:49 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:49.599 12 INFO cotyledon._service [-] Caught SIGTERM signal, graceful exiting of service AgentManager(0) [12]
Dec  9 05:42:49 np0005551604 ceilometer_agent_ipmi[222848]: 2025-12-09 10:42:49.613 2 DEBUG cotyledon._service_manager [-] Shutdown finish _shutdown /usr/lib/python3.9/site-packages/cotyledon/_service_manager.py:320
Dec  9 05:42:49 np0005551604 systemd[1]: libpod-ceb1c84a2b093143b9383b7e11364d7e851348d724743a0cd9ce4fd0c7070c92.scope: Deactivated successfully.
Dec  9 05:42:49 np0005551604 systemd[1]: libpod-ceb1c84a2b093143b9383b7e11364d7e851348d724743a0cd9ce4fd0c7070c92.scope: Consumed 2.296s CPU time.
Dec  9 05:42:49 np0005551604 podman[224361]: 2025-12-09 10:42:49.794344352 +0000 UTC m=+0.369317211 container died ceb1c84a2b093143b9383b7e11364d7e851348d724743a0cd9ce4fd0c7070c92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  9 05:42:49 np0005551604 systemd[1]: ceb1c84a2b093143b9383b7e11364d7e851348d724743a0cd9ce4fd0c7070c92-d49b0d9dbbb7496.timer: Deactivated successfully.
Dec  9 05:42:49 np0005551604 systemd[1]: Stopped /usr/bin/podman healthcheck run ceb1c84a2b093143b9383b7e11364d7e851348d724743a0cd9ce4fd0c7070c92.
Dec  9 05:42:49 np0005551604 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ceb1c84a2b093143b9383b7e11364d7e851348d724743a0cd9ce4fd0c7070c92-userdata-shm.mount: Deactivated successfully.
Dec  9 05:42:49 np0005551604 systemd[1]: var-lib-containers-storage-overlay-9890bb2656e1ba2d8e0afc2f648a786f5425824418ee3f17938feb8a5d0f6774-merged.mount: Deactivated successfully.
Dec  9 05:42:49 np0005551604 podman[224361]: 2025-12-09 10:42:49.88900168 +0000 UTC m=+0.463974549 container cleanup ceb1c84a2b093143b9383b7e11364d7e851348d724743a0cd9ce4fd0c7070c92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.3)
Dec  9 05:42:49 np0005551604 podman[224361]: ceilometer_agent_ipmi
Dec  9 05:42:50 np0005551604 podman[224386]: ceilometer_agent_ipmi
Dec  9 05:42:50 np0005551604 systemd[1]: edpm_ceilometer_agent_ipmi.service: Deactivated successfully.
Dec  9 05:42:50 np0005551604 systemd[1]: Stopped ceilometer_agent_ipmi container.
Dec  9 05:42:50 np0005551604 systemd[1]: Starting ceilometer_agent_ipmi container...
Dec  9 05:42:50 np0005551604 systemd[1]: Started libcrun container.
Dec  9 05:42:50 np0005551604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9890bb2656e1ba2d8e0afc2f648a786f5425824418ee3f17938feb8a5d0f6774/merged/etc/ceilometer/ceilometer_prom_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Dec  9 05:42:50 np0005551604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9890bb2656e1ba2d8e0afc2f648a786f5425824418ee3f17938feb8a5d0f6774/merged/etc/ceilometer/tls supports timestamps until 2038 (0x7fffffff)
Dec  9 05:42:50 np0005551604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9890bb2656e1ba2d8e0afc2f648a786f5425824418ee3f17938feb8a5d0f6774/merged/var/lib/openstack/config supports timestamps until 2038 (0x7fffffff)
Dec  9 05:42:50 np0005551604 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/9890bb2656e1ba2d8e0afc2f648a786f5425824418ee3f17938feb8a5d0f6774/merged/var/lib/kolla/config_files/config.json supports timestamps until 2038 (0x7fffffff)
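
The four kernel lines above are informational: the bind-mounted files sit on an XFS filesystem without the bigtime feature, so inode timestamps are 32-bit and run out at 0x7fffffff seconds after the epoch. Working out the cutoff with plain standard-library arithmetic:

    from datetime import datetime, timezone

    limit = 0x7fffffff  # 2147483647, the cap named in the kernel message
    print(datetime.fromtimestamp(limit, tz=timezone.utc))
    # 2038-01-19 03:14:07+00:00 -- the "until 2038" in the log
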
Dec  9 05:42:50 np0005551604 systemd[1]: Started /usr/bin/podman healthcheck run ceb1c84a2b093143b9383b7e11364d7e851348d724743a0cd9ce4fd0c7070c92.
Dec  9 05:42:50 np0005551604 podman[224396]: 2025-12-09 10:42:50.232668994 +0000 UTC m=+0.183751286 container init ceb1c84a2b093143b9383b7e11364d7e851348d724743a0cd9ce4fd0c7070c92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  9 05:42:50 np0005551604 ceilometer_agent_ipmi[224412]: + sudo -E kolla_set_configs
Dec  9 05:42:50 np0005551604 podman[224396]: 2025-12-09 10:42:50.268694862 +0000 UTC m=+0.219777114 container start ceb1c84a2b093143b9383b7e11364d7e851348d724743a0cd9ce4fd0c7070c92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_id=edpm, io.buildah.version=1.41.3, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team)
Dec  9 05:42:50 np0005551604 podman[224396]: ceilometer_agent_ipmi
Dec  9 05:42:50 np0005551604 systemd[1]: Started ceilometer_agent_ipmi container.
Dec  9 05:42:50 np0005551604 ceilometer_agent_ipmi[224412]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Dec  9 05:42:50 np0005551604 ceilometer_agent_ipmi[224412]: INFO:__main__:Validating config file
Dec  9 05:42:50 np0005551604 ceilometer_agent_ipmi[224412]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Dec  9 05:42:50 np0005551604 ceilometer_agent_ipmi[224412]: INFO:__main__:Copying service configuration files
Dec  9 05:42:50 np0005551604 ceilometer_agent_ipmi[224412]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf
Dec  9 05:42:50 np0005551604 ceilometer_agent_ipmi[224412]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer.conf to /etc/ceilometer/ceilometer.conf
Dec  9 05:42:50 np0005551604 ceilometer_agent_ipmi[224412]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf
Dec  9 05:42:50 np0005551604 ceilometer_agent_ipmi[224412]: INFO:__main__:Deleting /etc/ceilometer/polling.yaml
Dec  9 05:42:50 np0005551604 ceilometer_agent_ipmi[224412]: INFO:__main__:Copying /var/lib/openstack/config/polling.yaml to /etc/ceilometer/polling.yaml
Dec  9 05:42:50 np0005551604 ceilometer_agent_ipmi[224412]: INFO:__main__:Setting permission for /etc/ceilometer/polling.yaml
Dec  9 05:42:50 np0005551604 ceilometer_agent_ipmi[224412]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Dec  9 05:42:50 np0005551604 ceilometer_agent_ipmi[224412]: INFO:__main__:Copying /var/lib/openstack/config/custom.conf to /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Dec  9 05:42:50 np0005551604 ceilometer_agent_ipmi[224412]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Dec  9 05:42:50 np0005551604 ceilometer_agent_ipmi[224412]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Dec  9 05:42:50 np0005551604 ceilometer_agent_ipmi[224412]: INFO:__main__:Copying /var/lib/openstack/config/ceilometer-host-specific.conf to /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Dec  9 05:42:50 np0005551604 ceilometer_agent_ipmi[224412]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Dec  9 05:42:50 np0005551604 ceilometer_agent_ipmi[224412]: INFO:__main__:Writing out command to execute
Dec  9 05:42:50 np0005551604 podman[224419]: 2025-12-09 10:42:50.342542975 +0000 UTC m=+0.062987429 container health_status ceb1c84a2b093143b9383b7e11364d7e851348d724743a0cd9ce4fd0c7070c92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=starting, health_failing_streak=1, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, container_name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Dec  9 05:42:50 np0005551604 ceilometer_agent_ipmi[224412]: ++ cat /run_command
Dec  9 05:42:50 np0005551604 ceilometer_agent_ipmi[224412]: + CMD='/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout'
Dec  9 05:42:50 np0005551604 ceilometer_agent_ipmi[224412]: + ARGS=
Dec  9 05:42:50 np0005551604 ceilometer_agent_ipmi[224412]: + sudo kolla_copy_cacerts
Dec  9 05:42:50 np0005551604 systemd[1]: ceb1c84a2b093143b9383b7e11364d7e851348d724743a0cd9ce4fd0c7070c92-31abf5bbaf1ad868.service: Main process exited, code=exited, status=1/FAILURE
Dec  9 05:42:50 np0005551604 systemd[1]: ceb1c84a2b093143b9383b7e11364d7e851348d724743a0cd9ce4fd0c7070c92-31abf5bbaf1ad868.service: Failed with result 'exit-code'.
Dec  9 05:42:50 np0005551604 ceilometer_agent_ipmi[224412]: + [[ ! -n '' ]]
Dec  9 05:42:50 np0005551604 ceilometer_agent_ipmi[224412]: + . kolla_extend_start
Dec  9 05:42:50 np0005551604 ceilometer_agent_ipmi[224412]: Running command: '/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout'
Dec  9 05:42:50 np0005551604 ceilometer_agent_ipmi[224412]: + echo 'Running command: '\''/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout'\'''
Dec  9 05:42:50 np0005551604 ceilometer_agent_ipmi[224412]: + umask 0022
Dec  9 05:42:50 np0005551604 ceilometer_agent_ipmi[224412]: + exec /usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout
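
The kolla startup sequence above has three stages: kolla_set_configs copies each file from /var/lib/openstack/config into place per /var/lib/kolla/config_files/config.json (strategy COPY_ALWAYS deletes and re-copies on every start, hence the Deleting/Copying/Setting permission triplets), kolla_start reads the service command from /run_command, and finally execs it. A reduced sketch of that copy loop, assuming a simplified config.json schema (real kolla also handles ownership, globs, and merges):

    import json
    import os
    import shutil

    def copy_always(config_path="/var/lib/kolla/config_files/config.json"):
        """Minimal COPY_ALWAYS: delete the destination, copy the source,
        reapply permissions -- mirroring the INFO lines in the log."""
        with open(config_path) as f:
            cfg = json.load(f)
        for entry in cfg.get("config_files", []):
            src, dest = entry["source"], entry["dest"]
            if os.path.exists(dest):
                os.remove(dest)                        # "Deleting <dest>"
            shutil.copy(src, dest)                     # "Copying <src> to <dest>"
            if "perm" in entry:
                os.chmod(dest, int(entry["perm"], 8))  # "Setting permission"
        # kolla_start then execs the command stored in /run_command:
        with open("/run_command") as f:
            return f.read().strip()

    # Here that command is '/usr/bin/ceilometer-polling --polling-namespaces ipmi
    # --logfile /dev/stdout', exactly as echoed above.
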
Dec  9 05:42:51 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.216 2 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_manager_options /usr/lib/python3.9/site-packages/cotyledon/oslo_config_glue.py:40
Dec  9 05:42:51 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.216 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Dec  9 05:42:51 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.217 2 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Dec  9 05:42:51 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.217 2 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'ipmi', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Dec  9 05:42:51 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.217 2 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Dec  9 05:42:51 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.217 2 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Dec  9 05:42:51 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.217 2 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:42:51 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.217 2 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:42:51 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.217 2 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:42:51 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.217 2 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:42:51 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.217 2 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:42:51 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.217 2 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:42:51 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.218 2 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:42:51 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.218 2 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:42:51 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.218 2 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:42:51 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.218 2 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:42:51 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.218 2 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:42:51 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.218 2 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:42:51 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.218 2 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:42:51 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.218 2 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:42:51 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.218 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:42:51 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.218 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:42:51 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.218 2 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:42:51 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.218 2 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:42:51 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.219 2 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:42:51 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.219 2 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:42:51 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.219 2 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:42:51 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.219 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:42:51 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.219 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:42:51 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.219 2 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:42:51 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.219 2 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:42:51 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.219 2 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:42:51 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.219 2 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:42:51 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.219 2 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:42:51 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.219 2 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:42:51 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.219 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:42:51 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.219 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:42:51 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.220 2 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:42:51 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.220 2 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:42:51 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.220 2 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:42:51 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.220 2 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['ipmi'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:42:51 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.220 2 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:42:51 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.220 2 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:42:51 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.220 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:42:51 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.220 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:42:51 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.220 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:42:51 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.220 2 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:42:51 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.220 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:42:51 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.220 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:42:51 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.221 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:42:51 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.221 2 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:42:51 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.221 2 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:42:51 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.221 2 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:42:51 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.221 2 DEBUG cotyledon.oslo_config_glue [-] tenant_name_discovery          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:42:51 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.221 2 DEBUG cotyledon.oslo_config_glue [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:42:51 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.221 2 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:42:51 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.221 2 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:42:51 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.221 2 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:42:51 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.221 2 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:42:51 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.221 2 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:42:51 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.221 2 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:51 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.222 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:51 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.222 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:51 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.222 2 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:51 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.222 2 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:51 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.222 2 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:51 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.222 2 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:51 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.222 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.node_manager_init_retry   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:51 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.222 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:51 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.222 2 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.9/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:51 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.222 2 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_on_failure     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:51 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.222 2 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_path           = mon_pub_failures.txt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:51 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.222 2 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:51 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.223 2 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:51 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.223 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_count            = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:51 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.223 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_max_retries      = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:51 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.223 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_mode             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:51 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.223 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_polling_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:51 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.223 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_timeout          = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:51 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.223 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:51 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.223 2 DEBUG cotyledon.oslo_config_glue [-] monasca.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:51 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.223 2 DEBUG cotyledon.oslo_config_glue [-] monasca.client_max_retries     = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:51 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.223 2 DEBUG cotyledon.oslo_config_glue [-] monasca.client_retry_interval  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:51 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.223 2 DEBUG cotyledon.oslo_config_glue [-] monasca.clientapi_version      = 2_0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:51 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.223 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cloud_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:51 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.223 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cluster                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:51 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.224 2 DEBUG cotyledon.oslo_config_glue [-] monasca.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:51 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.224 2 DEBUG cotyledon.oslo_config_glue [-] monasca.control_plane          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:51 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.224 2 DEBUG cotyledon.oslo_config_glue [-] monasca.enable_api_pagination  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:51 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.224 2 DEBUG cotyledon.oslo_config_glue [-] monasca.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:51 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.224 2 DEBUG cotyledon.oslo_config_glue [-] monasca.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:51 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.224 2 DEBUG cotyledon.oslo_config_glue [-] monasca.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:51 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.224 2 DEBUG cotyledon.oslo_config_glue [-] monasca.monasca_mappings       = /etc/ceilometer/monasca_field_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:51 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.224 2 DEBUG cotyledon.oslo_config_glue [-] monasca.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:51 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.224 2 DEBUG cotyledon.oslo_config_glue [-] monasca.retry_on_failure       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:51 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.224 2 DEBUG cotyledon.oslo_config_glue [-] monasca.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:51 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.224 2 DEBUG cotyledon.oslo_config_glue [-] monasca.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:51 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.225 2 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:51 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.225 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:51 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.225 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:51 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.225 2 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:51 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.225 2 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'sahara', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:51 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.225 2 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:51 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.225 2 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:51 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.225 2 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:51 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.225 2 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:51 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.225 2 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:51 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.225 2 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:51 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.226 2 DEBUG cotyledon.oslo_config_glue [-] polling.tenant_name_discovery  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:51 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.226 2 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:51 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.226 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:51 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.226 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:51 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.226 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:51 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.226 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:51 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.226 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:51 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.226 2 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:51 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.226 2 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:51 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.226 2 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:51 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.227 2 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:51 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.227 2 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:51 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.227 2 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:51 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.227 2 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:51 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.227 2 DEBUG cotyledon.oslo_config_glue [-] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:51 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.227 2 DEBUG cotyledon.oslo_config_glue [-] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:51 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.227 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_ip                 = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:51 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.227 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:51 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.227 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:51 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.228 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_username           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:51 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.228 2 DEBUG cotyledon.oslo_config_glue [-] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:51 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.228 2 DEBUG cotyledon.oslo_config_glue [-] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:51 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.228 2 DEBUG cotyledon.oslo_config_glue [-] vmware.wsdl_location           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:51 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.228 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:51 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.228 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:51 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.228 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:51 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.228 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:51 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.228 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:51 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.228 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:51 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.229 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:51 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.229 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:51 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.229 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:51 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.229 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:51 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.229 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:51 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.229 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:51 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.229 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:51 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.229 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:51 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.229 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:51 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.229 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:51 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.229 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:51 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.230 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:51 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.230 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:51 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.230 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:51 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.230 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:51 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.230 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:51 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.230 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:51 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.230 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:51 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.230 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:51 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.230 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:51 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.230 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:51 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.231 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:51 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.231 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:51 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.231 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:51 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.231 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:51 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.231 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:51 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.231 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:51 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.231 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
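
The row of asterisks above closes an oslo.config option dump: because log_options is true, cotyledon's oslo_config_glue calls CONF.log_opt_values() when a worker starts, printing every registered option at DEBUG and replacing the value of any option declared with secret=True (publisher.telemetry_secret, transport_url, the rgw_admin_credentials keys) with '****'. A minimal sketch of that masking behaviour, assuming only stock oslo.config and re-registering two illustrative options by hand:

    import logging

    from oslo_config import cfg

    logging.basicConfig(level=logging.DEBUG)
    LOG = logging.getLogger(__name__)

    CONF = cfg.ConfigOpts()
    # Options declared secret=True are printed as '****' by log_opt_values().
    CONF.register_opts(
        [cfg.StrOpt('telemetry_secret', secret=True, default='s3cret')],
        group='publisher')
    CONF.register_opts(
        [cfg.StrOpt('metering_topic', default='metering')],
        group='publisher_notifier')

    CONF([])  # parse an empty command line
    # Emits one "name = value log_opt_values ..." DEBUG line per option,
    # in the same shape as the dump above.
    CONF.log_opt_values(LOG, logging.DEBUG)
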
Dec  9 05:42:51 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.252 12 INFO ceilometer.polling.manager [-] Looking for dynamic pollsters configurations at [['/etc/ceilometer/pollsters.d']].
Dec  9 05:42:51 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.254 12 INFO ceilometer.polling.manager [-] No dynamic pollsters found in folder [/etc/ceilometer/pollsters.d].
Dec  9 05:42:51 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.255 12 INFO ceilometer.polling.manager [-] No dynamic pollsters file found in dirs [['/etc/ceilometer/pollsters.d']].
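
The three INFO lines above are the polling manager scanning polling.pollsters_definitions_dirs (here /etc/ceilometer/pollsters.d) for dynamic pollster YAML definitions and finding none, so only the built-in 'ipmi' namespace is attempted further down. The discovery step amounts to a directory scan along these lines (a sketch; the real loader in ceilometer.polling.manager also parses and validates each file it finds):

    import glob
    import os


    def find_dynamic_pollster_files(dirs=('/etc/ceilometer/pollsters.d',)):
        """Return the YAML definition files found in the configured dirs."""
        found = []
        for d in dirs:
            if not os.path.isdir(d):
                continue
            found.extend(sorted(glob.glob(os.path.join(d, '*.yaml'))))
        return found


    if __name__ == '__main__':
        if not find_dynamic_pollster_files():
            # Matches the "No dynamic pollsters found" INFO lines above.
            print('No dynamic pollsters file found')
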
Dec  9 05:42:51 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.283 12 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'ceilometer-rootwrap', '/etc/ceilometer/rootwrap.conf', 'privsep-helper', '--privsep_context', 'ceilometer.privsep.sys_admin_pctxt', '--privsep_sock_path', '/tmp/tmp_izkm7uc/privsep.sock']
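
Reading IPMI sensors needs root, so the agent asks oslo.privsep for a privileged helper: sudo runs ceilometer-rootwrap, which execs privsep-helper with the context name and the temporary unix socket shown in the command line above; the "privsep daemon starting ... running as pid 19" lines a little further down are the helper's side of that handshake. In ceilometer the privileged context is declared roughly like this (a sketch along the lines of ceilometer.privsep, using the capability set the daemon itself reports below):

    from oslo_privsep import capabilities
    from oslo_privsep import priv_context

    # Functions decorated with @sys_admin_pctxt.entrypoint execute inside
    # the rootwrap-spawned helper process, holding only these capabilities.
    sys_admin_pctxt = priv_context.PrivContext(
        'ceilometer',
        cfg_section='privsep',
        pypath=__name__ + '.sys_admin_pctxt',
        capabilities=[capabilities.CAP_CHOWN,
                      capabilities.CAP_DAC_OVERRIDE,
                      capabilities.CAP_DAC_READ_SEARCH,
                      capabilities.CAP_FOWNER,
                      capabilities.CAP_NET_ADMIN,
                      capabilities.CAP_SYS_ADMIN],
    )


    @sys_admin_pctxt.entrypoint
    def read_ipmi_sensor(command):
        """Hypothetical entrypoint: body runs in the privileged daemon."""
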
Dec  9 05:42:51 np0005551604 python3.9[224592]: ansible-ansible.builtin.systemd Invoked with name=edpm_kepler.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  9 05:42:51 np0005551604 systemd[1]: Stopping kepler container...
Dec  9 05:42:51 np0005551604 kepler[224161]: I1209 10:42:51.533305       1 exporter.go:218] Received shutdown signal
Dec  9 05:42:51 np0005551604 kepler[224161]: I1209 10:42:51.534007       1 exporter.go:226] Exiting...
Dec  9 05:42:51 np0005551604 systemd[1]: libpod-8ad198c17f1da12dc50d5e17562d0139fb2a2f84db056ee9551dbf4f34c4cb9d.scope: Deactivated successfully.
Dec  9 05:42:51 np0005551604 podman[224603]: 2025-12-09 10:42:51.759007676 +0000 UTC m=+0.283410980 container died 8ad198c17f1da12dc50d5e17562d0139fb2a2f84db056ee9551dbf4f34c4cb9d (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, distribution-scope=public, io.buildah.version=1.29.0, summary=Provides the latest release of Red Hat Universal Base Image 9., name=ubi9, version=9.4, io.openshift.tags=base rhel9, release=1214.1726694543, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., vendor=Red Hat, Inc., io.openshift.expose-services=, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, config_id=edpm, container_name=kepler, architecture=x86_64, com.redhat.component=ubi9-container)
Dec  9 05:42:51 np0005551604 systemd[1]: 8ad198c17f1da12dc50d5e17562d0139fb2a2f84db056ee9551dbf4f34c4cb9d-7ead5e0f1bade9d9.timer: Deactivated successfully.
Dec  9 05:42:51 np0005551604 systemd[1]: Stopped /usr/bin/podman healthcheck run 8ad198c17f1da12dc50d5e17562d0139fb2a2f84db056ee9551dbf4f34c4cb9d.
Dec  9 05:42:51 np0005551604 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-8ad198c17f1da12dc50d5e17562d0139fb2a2f84db056ee9551dbf4f34c4cb9d-userdata-shm.mount: Deactivated successfully.
Dec  9 05:42:51 np0005551604 systemd[1]: var-lib-containers-storage-overlay-4981261eff032724feeb37b979fc07f98c64089f68922d5ec592f23cf06ee21b-merged.mount: Deactivated successfully.
Dec  9 05:42:51 np0005551604 podman[224603]: 2025-12-09 10:42:51.809078685 +0000 UTC m=+0.333481969 container cleanup 8ad198c17f1da12dc50d5e17562d0139fb2a2f84db056ee9551dbf4f34c4cb9d (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, managed_by=edpm_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, maintainer=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, io.openshift.expose-services=, release=1214.1726694543, architecture=x86_64, com.redhat.component=ubi9-container, distribution-scope=public, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., name=ubi9, version=9.4, io.openshift.tags=base rhel9, io.k8s.display-name=Red Hat Universal Base Image 9, summary=Provides the latest release of Red Hat Universal Base Image 9., build-date=2024-09-18T21:23:30, config_id=edpm)
Dec  9 05:42:51 np0005551604 podman[224603]: kepler
Dec  9 05:42:51 np0005551604 podman[224630]: kepler
Dec  9 05:42:51 np0005551604 systemd[1]: edpm_kepler.service: Deactivated successfully.
Dec  9 05:42:51 np0005551604 systemd[1]: Stopped kepler container.
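
The stop sequence above is journald's view of the ansible systemd task restarting edpm_kepler.service; the unit is started again immediately below. The long "container died" / "container cleanup" records carry the container's config_data, which is enough to reconstruct the launch approximately as a plain podman invocation (a hedged reconstruction from those fields, not the literal command the edpm_ansible role runs; note that with --net host the published port is effectively redundant):

    import subprocess

    # Approximation assembled from the config_data in the podman records
    # above; '-v=2' after the image name is kepler's own verbosity flag.
    cmd = [
        'podman', 'run', '--detach', '--name', 'kepler',
        '--privileged', '--net', 'host', '--restart', 'always',
        '-p', '8888:8888',  # redundant with --net host
        '-e', 'ENABLE_GPU=true',
        '-e', 'EXPOSE_CONTAINER_METRICS=true',
        '-e', 'ENABLE_PROCESS_METRICS=true',
        '-e', 'EXPOSE_VM_METRICS=true',
        '-e', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS=false',
        '-e', 'LIBVIRT_METADATA_URI=http://openstack.org/xmlns/libvirt/nova/1.1',
        '-v', '/lib/modules:/lib/modules:ro',
        '-v', '/run/libvirt:/run/libvirt:shared,ro',
        '-v', '/sys:/sys',
        '-v', '/proc:/proc',
        '-v', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z',
        'quay.io/sustainable_computing_io/kepler:release-0.7.12',
        '-v=2',
    ]
    subprocess.run(cmd, check=True)
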
Dec  9 05:42:51 np0005551604 systemd[1]: Starting kepler container...
Dec  9 05:42:51 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.960 12 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Dec  9 05:42:51 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.961 12 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmp_izkm7uc/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Dec  9 05:42:51 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.855 19 INFO oslo.privsep.daemon [-] privsep daemon starting
Dec  9 05:42:51 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.860 19 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Dec  9 05:42:51 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.862 19 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/none
Dec  9 05:42:51 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:51.862 19 INFO oslo.privsep.daemon [-] privsep daemon running as pid 19
Dec  9 05:42:52 np0005551604 systemd[1]: Started libcrun container.
Dec  9 05:42:52 np0005551604 systemd[1]: Started /usr/bin/podman healthcheck run 8ad198c17f1da12dc50d5e17562d0139fb2a2f84db056ee9551dbf4f34c4cb9d.
Dec  9 05:42:52 np0005551604 podman[224644]: 2025-12-09 10:42:52.055555462 +0000 UTC m=+0.126454872 container init 8ad198c17f1da12dc50d5e17562d0139fb2a2f84db056ee9551dbf4f34c4cb9d (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, io.openshift.tags=base rhel9, architecture=x86_64, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., container_name=kepler, vcs-type=git, io.openshift.expose-services=, version=9.4, io.k8s.display-name=Red Hat Universal Base Image 9, managed_by=edpm_ansible, name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, maintainer=Red Hat, Inc., distribution-scope=public, build-date=2024-09-18T21:23:30, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, release=1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of Red Hat Universal Base Image 9.)
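
With "container init" logged the restart is complete, and the per-container healthcheck timer re-arms (the "/usr/bin/podman healthcheck run <id>" unit above), periodically executing the /openstack/healthcheck script mounted into the container. Since kepler is a Prometheus exporter published on host port 8888 per the config_data, a quick liveness probe from the host could look like the following (a sketch; the /metrics path is the Prometheus exporter convention, assumed here rather than shown anywhere in this log):

    import urllib.request

    # Poll the kepler exporter on the host network ('ports': 8888:8888 above).
    with urllib.request.urlopen('http://127.0.0.1:8888/metrics', timeout=5) as r:
        body = r.read().decode('utf-8', 'replace')

    # Count exported samples as a crude liveness signal.
    samples = [l for l in body.splitlines() if l and not l.startswith('#')]
    print(f'kepler is up, {len(samples)} metric samples exported')
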
Dec  9 05:42:52 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.063 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.current: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec  9 05:42:52 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.063 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.fan: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec  9 05:42:52 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.064 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.airflow: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec  9 05:42:52 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.064 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.cpu_util: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec  9 05:42:52 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.064 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.cups: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec  9 05:42:52 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.064 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.io_util: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec  9 05:42:52 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.065 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.mem_util: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec  9 05:42:52 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.065 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.outlet_temperature: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec  9 05:42:52 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.065 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.power: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec  9 05:42:52 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.065 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.temperature: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec  9 05:42:52 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.065 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.temperature: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec  9 05:42:52 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.065 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.voltage: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Dec  9 05:42:52 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.065 12 WARNING ceilometer.polling.manager [-] No valid pollsters can be loaded from ['ipmi'] namespaces
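
Two distinct failures hide in the "Skip loading extension" block above: the hardware.ipmi.* sensor pollsters are skipped because ipmitool finds no usable IPMI device on this KVM guest (a Nova VM has no BMC), while the hardware.ipmi.node.* pollsters fail earlier with "object.__new__() takes exactly one argument", an exception raised while instantiating the Intel Node Manager helper. The net result is the WARNING: the 'ipmi' namespace contributes no pollsters on this host. A standalone probe in the same spirit as the "IPMITool not supported on host" check (assuming ipmitool is installed; whether ceilometer issues exactly this request is not visible in the log, but 0x06/0x01 is the standard Get Device ID command):

    import subprocess


    def ipmi_supported():
        """Return True if a local BMC answers a Get Device ID request."""
        try:
            # NetFn 0x06 (App), command 0x01: Get Device ID.
            subprocess.run(['ipmitool', 'raw', '0x06', '0x01'],
                           check=True, capture_output=True, timeout=10)
        except (OSError, subprocess.SubprocessError):
            return False
        return True


    if __name__ == '__main__':
        # On a Nova/KVM guest like this one, expect False.
        print('IPMITool supported:', ipmi_supported())
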
Dec  9 05:42:52 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.068 12 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_options /usr/lib/python3.9/site-packages/cotyledon/oslo_config_glue.py:48
Dec  9 05:42:52 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.068 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Dec  9 05:42:52 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.068 12 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Dec  9 05:42:52 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.068 12 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'ipmi', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Dec  9 05:42:52 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.068 12 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Dec  9 05:42:52 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.068 12 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Dec  9 05:42:52 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.068 12 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:42:52 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.069 12 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:42:52 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.069 12 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:42:52 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.069 12 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:42:52 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.069 12 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:42:52 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.069 12 DEBUG cotyledon.oslo_config_glue [-] control_exchange               = ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:42:52 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.069 12 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:42:52 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.069 12 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:42:52 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.069 12 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:42:52 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.069 12 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:42:52 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.070 12 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:42:52 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.070 12 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:42:52 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.070 12 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:42:52 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.070 12 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:42:52 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.070 12 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:42:52 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.070 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:42:52 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.070 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:42:52 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.070 12 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:42:52 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.070 12 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:42:52 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.070 12 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:42:52 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.071 12 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:42:52 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.071 12 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:42:52 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.071 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:42:52 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.071 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:42:52 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.071 12 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:42:52 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.071 12 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:42:52 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.071 12 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:42:52 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.071 12 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:42:52 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.071 12 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:42:52 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.071 12 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:42:52 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.071 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:42:52 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.072 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:42:52 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.072 12 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:42:52 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.072 12 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:42:52 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.072 12 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:42:52 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.072 12 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['ipmi'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:42:52 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.072 12 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:42:52 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.072 12 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:42:52 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.072 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:42:52 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.072 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:42:52 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.072 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:42:52 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.072 12 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:42:52 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.073 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:42:52 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.073 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:42:52 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.073 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:42:52 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.073 12 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:42:52 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.073 12 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:42:52 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.073 12 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:42:52 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.073 12 DEBUG cotyledon.oslo_config_glue [-] tenant_name_discovery          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:42:52 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.073 12 DEBUG cotyledon.oslo_config_glue [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:42:52 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.073 12 DEBUG cotyledon.oslo_config_glue [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:42:52 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.073 12 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:42:52 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.074 12 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:42:52 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.074 12 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:42:52 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.074 12 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:42:52 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.074 12 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Dec  9 05:42:52 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.074 12 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:52 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.074 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:52 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.074 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:52 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.074 12 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:52 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.074 12 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:52 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.074 12 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:52 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.074 12 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:52 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.075 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.node_manager_init_retry   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:52 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.075 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:52 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.075 12 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.9/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:52 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.075 12 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_on_failure     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:52 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.075 12 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_path           = mon_pub_failures.txt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:52 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.075 12 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:52 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.075 12 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:52 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.075 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_count            = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:52 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.075 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_max_retries      = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:52 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.076 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_mode             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:52 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.076 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_polling_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:52 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.076 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_timeout          = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:52 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.076 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:52 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.076 12 DEBUG cotyledon.oslo_config_glue [-] monasca.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:52 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.076 12 DEBUG cotyledon.oslo_config_glue [-] monasca.client_max_retries     = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:52 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.076 12 DEBUG cotyledon.oslo_config_glue [-] monasca.client_retry_interval  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:52 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.076 12 DEBUG cotyledon.oslo_config_glue [-] monasca.clientapi_version      = 2_0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:52 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.076 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cloud_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:52 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.076 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cluster                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:52 np0005551604 kepler[224661]: WARNING: failed to read int from file: open /sys/devices/system/cpu/cpu0/online: no such file or directory
Dec  9 05:42:52 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.076 12 DEBUG cotyledon.oslo_config_glue [-] monasca.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:52 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.076 12 DEBUG cotyledon.oslo_config_glue [-] monasca.control_plane          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:52 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.077 12 DEBUG cotyledon.oslo_config_glue [-] monasca.enable_api_pagination  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:52 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.077 12 DEBUG cotyledon.oslo_config_glue [-] monasca.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:52 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.077 12 DEBUG cotyledon.oslo_config_glue [-] monasca.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:52 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.077 12 DEBUG cotyledon.oslo_config_glue [-] monasca.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:52 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.077 12 DEBUG cotyledon.oslo_config_glue [-] monasca.monasca_mappings       = /etc/ceilometer/monasca_field_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:52 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.077 12 DEBUG cotyledon.oslo_config_glue [-] monasca.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:52 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.077 12 DEBUG cotyledon.oslo_config_glue [-] monasca.retry_on_failure       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:52 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.077 12 DEBUG cotyledon.oslo_config_glue [-] monasca.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:52 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.077 12 DEBUG cotyledon.oslo_config_glue [-] monasca.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:52 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.077 12 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:52 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.077 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:52 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.078 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:52 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.078 12 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:52 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.078 12 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'sahara', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:52 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.078 12 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:52 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.078 12 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:52 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.078 12 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:52 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.078 12 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:52 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.078 12 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:52 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.078 12 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:52 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.078 12 DEBUG cotyledon.oslo_config_glue [-] polling.tenant_name_discovery  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:52 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.079 12 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:52 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.079 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:52 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.079 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:52 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.079 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:52 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.079 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:52 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.079 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:52 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.079 12 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:52 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.079 12 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:52 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.079 12 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:52 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.079 12 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:52 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.079 12 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:52 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.080 12 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:52 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.080 12 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:52 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.080 12 DEBUG cotyledon.oslo_config_glue [-] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:52 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.080 12 DEBUG cotyledon.oslo_config_glue [-] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:52 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.080 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_ip                 = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:52 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.080 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:52 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.080 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:52 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.080 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_username           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:52 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.080 12 DEBUG cotyledon.oslo_config_glue [-] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:52 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.080 12 DEBUG cotyledon.oslo_config_glue [-] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:52 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.080 12 DEBUG cotyledon.oslo_config_glue [-] vmware.wsdl_location           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:52 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.081 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:52 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.081 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:52 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.081 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:52 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.081 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:52 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.081 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:52 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.081 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:52 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.081 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:52 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.081 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:52 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.081 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:52 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.081 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:52 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.081 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:52 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.082 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:52 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.082 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:52 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.082 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:52 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.082 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:52 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.082 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:52 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.082 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:52 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.082 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:52 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.082 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:52 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.082 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:52 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.082 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:52 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.082 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:52 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.083 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:52 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.083 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:52 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.083 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:52 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.083 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:52 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.083 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:52 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.083 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:52 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.083 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:52 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.083 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:52 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.083 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:52 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.083 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:52 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.083 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:52 np0005551604 kepler[224661]: I1209 10:42:52.083990       1 exporter.go:103] Kepler running on version: v0.7.12-dirty
Dec  9 05:42:52 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.084 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:52 np0005551604 kepler[224661]: I1209 10:42:52.084118       1 config.go:293] using gCgroup ID in the BPF program: true
Dec  9 05:42:52 np0005551604 kepler[224661]: I1209 10:42:52.084136       1 config.go:295] kernel version: 5.14
Dec  9 05:42:52 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.084 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:52 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.084 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:52 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.084 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:52 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.084 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:52 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.084 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:52 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.084 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:52 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.084 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:52 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.084 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:52 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.084 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:52 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.084 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:52 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.084 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:52 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.085 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:52 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.085 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:52 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.085 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:52 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.085 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:52 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.085 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:52 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.085 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:52 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.085 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:52 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.085 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:52 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.085 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:52 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.085 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:52 np0005551604 kepler[224661]: I1209 10:42:52.086029       1 power.go:78] Unable to obtain power, use estimate method
Dec  9 05:42:52 np0005551604 kepler[224661]: I1209 10:42:52.086055       1 redfish.go:169] failed to get redfish credential file path
Dec  9 05:42:52 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.086 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:52 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.086 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:52 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.086 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:52 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.086 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:52 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.086 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:52 np0005551604 kepler[224661]: I1209 10:42:52.086450       1 acpi.go:71] Could not find any ACPI power meter path. Is it a VM?
Dec  9 05:42:52 np0005551604 kepler[224661]: I1209 10:42:52.086464       1 power.go:79] using none to obtain power
Dec  9 05:42:52 np0005551604 kepler[224661]: E1209 10:42:52.086479       1 accelerator.go:154] [DUMMY] doesn't contain GPU
Dec  9 05:42:52 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.086 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:52 np0005551604 kepler[224661]: E1209 10:42:52.086502       1 exporter.go:154] failed to init GPU accelerators: no devices found
Dec  9 05:42:52 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.086 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:52 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.086 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:52 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.086 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:52 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.086 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:52 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.086 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:52 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.086 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:52 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.087 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Dec  9 05:42:52 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.087 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Dec  9 05:42:52 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.087 12 DEBUG cotyledon._service [-] Run service AgentManager(0) [12] wait_forever /usr/lib/python3.9/site-packages/cotyledon/_service.py:241
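
[editor's note] The long DEBUG block ending above is cotyledon's oslo_config_glue dumping every registered option once at service start via oslo.config's log_opt_values(); options registered with secret=True (passwords, keys, transport URLs) are masked as ****. A minimal sketch of that mechanism, assuming only oslo.config is installed — the group and option names below are illustrative, echoing the [service_credentials] entries above:

    import logging
    from oslo_config import cfg

    # Illustrative options; secret=True is what renders '****' in the dump above.
    opts = [
        cfg.StrOpt('auth_type', default='password'),
        cfg.StrOpt('interface', default='internalURL'),
        cfg.StrOpt('password', secret=True),
    ]
    cfg.CONF.register_opts(opts, group='service_credentials')

    logging.basicConfig(level=logging.DEBUG)
    LOG = logging.getLogger('demo')
    cfg.CONF([])  # parse an (empty) command line so the config object is usable
    cfg.CONF.log_opt_values(LOG, logging.DEBUG)  # one DEBUG line per option
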
Dec  9 05:42:52 np0005551604 kepler[224661]: WARNING: failed to read int from file: open /sys/devices/system/cpu/cpu0/online: no such file or directory
Dec  9 05:42:52 np0005551604 kepler[224661]: I1209 10:42:52.088597       1 exporter.go:84] Number of CPUs: 8
Dec  9 05:42:52 np0005551604 ceilometer_agent_ipmi[224412]: 2025-12-09 10:42:52.090 12 DEBUG ceilometer.agent [-] Config file: {'sources': [{'name': 'pollsters', 'interval': 120, 'meters': ['hardware.*']}]} load_config /usr/lib/python3.9/site-packages/ceilometer/agent.py:64
Dec  9 05:42:52 np0005551604 podman[224644]: 2025-12-09 10:42:52.092902615 +0000 UTC m=+0.163802025 container start 8ad198c17f1da12dc50d5e17562d0139fb2a2f84db056ee9551dbf4f34c4cb9d (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., version=9.4, config_id=edpm, com.redhat.component=ubi9-container, maintainer=Red Hat, Inc., architecture=x86_64, container_name=kepler, release=1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9, summary=Provides the latest release of Red Hat Universal Base Image 9., io.buildah.version=1.29.0, io.openshift.expose-services=, name=ubi9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, distribution-scope=public, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, build-date=2024-09-18T21:23:30, managed_by=edpm_ansible, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git)
Dec  9 05:42:52 np0005551604 podman[224644]: kepler
Dec  9 05:42:52 np0005551604 systemd[1]: Started kepler container.
Dec  9 05:42:52 np0005551604 podman[224672]: 2025-12-09 10:42:52.174732815 +0000 UTC m=+0.072096757 container health_status 8ad198c17f1da12dc50d5e17562d0139fb2a2f84db056ee9551dbf4f34c4cb9d (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=starting, health_failing_streak=1, health_log=, version=9.4, vcs-type=git, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., build-date=2024-09-18T21:23:30, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.buildah.version=1.29.0, release-0.7.12=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, name=ubi9, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, distribution-scope=public, io.openshift.tags=base rhel9, architecture=x86_64, com.redhat.component=ubi9-container, io.openshift.expose-services=)
Dec  9 05:42:52 np0005551604 systemd[1]: 8ad198c17f1da12dc50d5e17562d0139fb2a2f84db056ee9551dbf4f34c4cb9d-66537a7c068fd481.service: Main process exited, code=exited, status=1/FAILURE
Dec  9 05:42:52 np0005551604 systemd[1]: 8ad198c17f1da12dc50d5e17562d0139fb2a2f84db056ee9551dbf4f34c4cb9d-66537a7c068fd481.service: Failed with result 'exit-code'.
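
[editor's note] The transient 8ad198…-66537a7c068fd481.service unit is the systemd-driven healthcheck for the kepler container; its exit status 1 matches the health_status=starting / health_failing_streak=1 event logged just above, i.e. the first probe fired while the container was still coming up. A sketch for replaying the probe by hand, assuming the podman CLI on the host:

    import subprocess

    # Re-run the container healthcheck outside the systemd timer.
    # 'podman healthcheck run' exits 0 when healthy, non-zero otherwise.
    rc = subprocess.run(["podman", "healthcheck", "run", "kepler"]).returncode
    print("healthy" if rc == 0 else f"unhealthy (rc={rc})")
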
Dec  9 05:42:52 np0005551604 kepler[224661]: I1209 10:42:52.657016       1 watcher.go:83] Using in cluster k8s config
Dec  9 05:42:52 np0005551604 kepler[224661]: I1209 10:42:52.657106       1 watcher.go:90] failed to get config: unable to load in-cluster configuration, KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT must be defined
Dec  9 05:42:52 np0005551604 kepler[224661]: E1209 10:42:52.657215       1 manager.go:59] could not run the watcher k8s APIserver watcher was not enabled
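
[editor's note] The three watcher lines above are expected on an EDPM node: Kepler first tries the in-cluster Kubernetes config, which needs the KUBERNETES_SERVICE_HOST/PORT environment variables that only exist inside a k8s pod, then disables the API-server watcher. A sketch of the same precondition check — an assumption mirroring client-go's rest.InClusterConfig() requirement, not Kepler's actual code:

    import os

    def k8s_in_cluster() -> bool:
        # In-cluster config is only possible when both service env vars are set.
        return bool(os.environ.get("KUBERNETES_SERVICE_HOST")
                    and os.environ.get("KUBERNETES_SERVICE_PORT"))

    print("watcher enabled" if k8s_in_cluster() else "watcher disabled")
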
Dec  9 05:42:52 np0005551604 kepler[224661]: I1209 10:42:52.664698       1 process_energy.go:129] Using the Ratio Power Model to estimate PROCESS_TOTAL Power
Dec  9 05:42:52 np0005551604 kepler[224661]: I1209 10:42:52.664721       1 process_energy.go:130] Feature names: [bpf_cpu_time_ms]
Dec  9 05:42:52 np0005551604 kepler[224661]: I1209 10:42:52.672154       1 process_energy.go:129] Using the Ratio Power Model to estimate PROCESS_COMPONENTS Power
Dec  9 05:42:52 np0005551604 kepler[224661]: I1209 10:42:52.672208       1 process_energy.go:130] Feature names: [bpf_cpu_time_ms bpf_cpu_time_ms bpf_cpu_time_ms   gpu_compute_util]
Dec  9 05:42:52 np0005551604 kepler[224661]: I1209 10:42:52.687499       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Dec  9 05:42:52 np0005551604 kepler[224661]: I1209 10:42:52.687567       1 model.go:125] Requesting for Machine Spec: &{authenticamd amd_epyc_rome 8 8 7 2800 1}
Dec  9 05:42:52 np0005551604 kepler[224661]: I1209 10:42:52.687590       1 node_platform_energy.go:53] Using the Regressor/AbsPower Power Model to estimate Node Platform Power
Dec  9 05:42:52 np0005551604 kepler[224661]: I1209 10:42:52.698078       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Dec  9 05:42:52 np0005551604 kepler[224661]: I1209 10:42:52.698138       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Dec  9 05:42:52 np0005551604 kepler[224661]: I1209 10:42:52.698148       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Dec  9 05:42:52 np0005551604 kepler[224661]: I1209 10:42:52.698157       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Dec  9 05:42:52 np0005551604 kepler[224661]: I1209 10:42:52.698167       1 model.go:125] Requesting for Machine Spec: &{authenticamd amd_epyc_rome 8 8 7 2800 1}
Dec  9 05:42:52 np0005551604 kepler[224661]: I1209 10:42:52.698184       1 node_component_energy.go:57] Using the Regressor/AbsPower Power Model to estimate Node Component Power
Dec  9 05:42:52 np0005551604 kepler[224661]: I1209 10:42:52.698301       1 prometheus_collector.go:90] Registered Process Prometheus metrics
Dec  9 05:42:52 np0005551604 kepler[224661]: I1209 10:42:52.698353       1 prometheus_collector.go:95] Registered Container Prometheus metrics
Dec  9 05:42:52 np0005551604 kepler[224661]: I1209 10:42:52.698392       1 prometheus_collector.go:100] Registered VM Prometheus metrics
Dec  9 05:42:52 np0005551604 kepler[224661]: I1209 10:42:52.698427       1 prometheus_collector.go:104] Registered Node Prometheus metrics
Dec  9 05:42:52 np0005551604 kepler[224661]: I1209 10:42:52.698631       1 exporter.go:194] starting to listen on 0.0.0.0:8888
Dec  9 05:42:52 np0005551604 kepler[224661]: I1209 10:42:52.699137       1 exporter.go:208] Started Kepler in 615.421496ms
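
[editor's note] With the exporter now listening on 0.0.0.0:8888, the estimated power figures from the Ratio/Regressor models registered above are exposed in Prometheus text format. A stdlib-only sketch to spot-check the endpoint, assuming it is reachable on localhost (the container uses host networking per its config_data):

    from urllib.request import urlopen

    # Fetch the Prometheus exposition and keep only node-level Kepler series.
    with urlopen("http://127.0.0.1:8888/metrics", timeout=5) as resp:
        for line in resp.read().decode().splitlines():
            if line.startswith("kepler_node_"):
                print(line)
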
Dec  9 05:42:52 np0005551604 python3.9[224850]: ansible-ansible.builtin.find Invoked with file_type=directory paths=['/var/lib/openstack/healthchecks/'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
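
[editor's note] The ansible-ansible.builtin.find invocation above lists only first-level directories under /var/lib/openstack/healthchecks/ (file_type=directory, recurse=False, hidden=False, follow=False). A sketch of the equivalent lookup with the standard library:

    import os

    def healthcheck_dirs(root="/var/lib/openstack/healthchecks/"):
        # Non-recursive, directories only, skipping hidden entries and not
        # following symlinks -- mirroring the find parameters logged above.
        with os.scandir(root) as entries:
            return [e.path for e in entries
                    if e.is_dir(follow_symlinks=False)
                    and not e.name.startswith(".")]

    print(healthcheck_dirs())
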
Dec  9 05:42:54 np0005551604 podman[225012]: 2025-12-09 10:42:54.037484354 +0000 UTC m=+0.101648768 container health_status b432835229990b9e7cd237d75f8273b15e565fca524d4ea9a7c1f1bf3c773614 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, config_id=edpm, managed_by=edpm_ansible)
Dec  9 05:42:54 np0005551604 python3.9[225013]: ansible-containers.podman.podman_container_info Invoked with name=['ovn_controller'] executable=podman
Dec  9 05:42:54 np0005551604 podman[225126]: 2025-12-09 10:42:54.986559854 +0000 UTC m=+0.122272269 container health_status 8f562587c42532f877bd4ac5090cf2d81dd9415b6201e22f74972e6d6b9e9403 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS)
Dec  9 05:42:55 np0005551604 python3.9[225214]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ovn_controller detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  9 05:42:55 np0005551604 systemd[1]: Started libpod-conmon-e0a077177b2f078df1f170a6e5c0e8e08d4365b999ec0c487047ed6ab628f3d6.scope.
Dec  9 05:42:55 np0005551604 podman[225215]: 2025-12-09 10:42:55.494399552 +0000 UTC m=+0.144671495 container exec e0a077177b2f078df1f170a6e5c0e8e08d4365b999ec0c487047ed6ab628f3d6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_controller, org.label-schema.license=GPLv2)
Dec  9 05:42:55 np0005551604 podman[225215]: 2025-12-09 10:42:55.509725698 +0000 UTC m=+0.159997661 container exec_died e0a077177b2f078df1f170a6e5c0e8e08d4365b999ec0c487047ed6ab628f3d6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Dec  9 05:42:55 np0005551604 systemd[1]: libpod-conmon-e0a077177b2f078df1f170a6e5c0e8e08d4365b999ec0c487047ed6ab628f3d6.scope: Deactivated successfully.
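
[editor's note] The podman_container_exec / libpod-conmon scope pairs above (here for ovn_controller, and repeated below for ovn_metadata_agent and multipathd) are Ansible probing the UID and GID each container runs as; every pair boils down to a plain podman exec. A sketch assuming the podman CLI:

    import subprocess

    def exec_in(container: str, *cmd: str) -> str:
        # Equivalent of the ansible-containers.podman.podman_container_exec calls.
        out = subprocess.run(["podman", "exec", container, *cmd],
                             capture_output=True, text=True, check=True)
        return out.stdout.strip()

    print("uid:", exec_in("ovn_controller", "id", "-u"))
    print("gid:", exec_in("ovn_controller", "id", "-g"))
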
Dec  9 05:42:56 np0005551604 python3.9[225398]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ovn_controller detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  9 05:42:56 np0005551604 systemd[1]: Started libpod-conmon-e0a077177b2f078df1f170a6e5c0e8e08d4365b999ec0c487047ed6ab628f3d6.scope.
Dec  9 05:42:56 np0005551604 podman[225399]: 2025-12-09 10:42:56.510245854 +0000 UTC m=+0.131823378 container exec e0a077177b2f078df1f170a6e5c0e8e08d4365b999ec0c487047ed6ab628f3d6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec  9 05:42:56 np0005551604 podman[225399]: 2025-12-09 10:42:56.55467704 +0000 UTC m=+0.176254574 container exec_died e0a077177b2f078df1f170a6e5c0e8e08d4365b999ec0c487047ed6ab628f3d6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3)
Dec  9 05:42:56 np0005551604 systemd[1]: libpod-conmon-e0a077177b2f078df1f170a6e5c0e8e08d4365b999ec0c487047ed6ab628f3d6.scope: Deactivated successfully.
Dec  9 05:42:57 np0005551604 python3.9[225581]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/ovn_controller recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:42:58 np0005551604 python3.9[225733]: ansible-containers.podman.podman_container_info Invoked with name=['ovn_metadata_agent'] executable=podman
Dec  9 05:42:59 np0005551604 python3.9[225897]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ovn_metadata_agent detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  9 05:42:59 np0005551604 systemd[1]: Started libpod-conmon-8f562587c42532f877bd4ac5090cf2d81dd9415b6201e22f74972e6d6b9e9403.scope.
Dec  9 05:42:59 np0005551604 podman[225898]: 2025-12-09 10:42:59.666228839 +0000 UTC m=+0.123103671 container exec 8f562587c42532f877bd4ac5090cf2d81dd9415b6201e22f74972e6d6b9e9403 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec  9 05:42:59 np0005551604 podman[225898]: 2025-12-09 10:42:59.698923897 +0000 UTC m=+0.155798759 container exec_died 8f562587c42532f877bd4ac5090cf2d81dd9415b6201e22f74972e6d6b9e9403 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.license=GPLv2, managed_by=edpm_ansible, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  9 05:42:59 np0005551604 systemd[1]: libpod-conmon-8f562587c42532f877bd4ac5090cf2d81dd9415b6201e22f74972e6d6b9e9403.scope: Deactivated successfully.
Dec  9 05:42:59 np0005551604 podman[203687]: time="2025-12-09T10:42:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  9 05:42:59 np0005551604 podman[203687]: @ - - [09/Dec/2025:10:42:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28292 "" "Go-http-client/1.1"
Dec  9 05:42:59 np0005551604 podman[203687]: @ - - [09/Dec/2025:10:42:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4261 "" "Go-http-client/1.1"
Dec  9 05:43:00 np0005551604 podman[226050]: 2025-12-09 10:43:00.456728237 +0000 UTC m=+0.094756882 container health_status 5da5cd4e36e0bba48fb617392bc8983ed1dbced7e4599ef74bb3327a2d50468d (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, config_id=edpm, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, vendor=Red Hat, Inc., container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.expose-services=, name=ubi9-minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, version=9.6, build-date=2025-08-20T13:12:41, distribution-scope=public, com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers)
Dec  9 05:43:00 np0005551604 python3.9[226098]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ovn_metadata_agent detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  9 05:43:00 np0005551604 systemd[1]: Started libpod-conmon-8f562587c42532f877bd4ac5090cf2d81dd9415b6201e22f74972e6d6b9e9403.scope.
Dec  9 05:43:00 np0005551604 podman[226099]: 2025-12-09 10:43:00.816152669 +0000 UTC m=+0.132443834 container exec 8f562587c42532f877bd4ac5090cf2d81dd9415b6201e22f74972e6d6b9e9403 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Dec  9 05:43:00 np0005551604 podman[226099]: 2025-12-09 10:43:00.850040818 +0000 UTC m=+0.166331893 container exec_died 8f562587c42532f877bd4ac5090cf2d81dd9415b6201e22f74972e6d6b9e9403 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  9 05:43:00 np0005551604 systemd[1]: libpod-conmon-8f562587c42532f877bd4ac5090cf2d81dd9415b6201e22f74972e6d6b9e9403.scope: Deactivated successfully.
Dec  9 05:43:01 np0005551604 openstack_network_exporter[205823]: ERROR   10:43:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  9 05:43:01 np0005551604 openstack_network_exporter[205823]: ERROR   10:43:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  9 05:43:01 np0005551604 openstack_network_exporter[205823]: 
Dec  9 05:43:01 np0005551604 openstack_network_exporter[205823]: ERROR   10:43:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  9 05:43:01 np0005551604 openstack_network_exporter[205823]: ERROR   10:43:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  9 05:43:01 np0005551604 openstack_network_exporter[205823]: ERROR   10:43:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  9 05:43:01 np0005551604 openstack_network_exporter[205823]: 
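
[editor's note] The openstack_network_exporter errors above are all of one kind: its appctl helper finds no control socket files for ovsdb-server or ovn-northd, and the dpif-netdev/* calls have no userspace (DPDK) datapath to query. These look like probe failures — likely benign on a node where those daemons either are not running or expose their sockets elsewhere. A sketch to check which control sockets actually exist, assuming the default runtime directories the container mounts (/run/openvswitch and /run/ovn):

    import glob

    # Control sockets appctl would use; empty lists explain the errors above.
    for pattern in ("/run/openvswitch/*.ctl", "/run/ovn/*.ctl"):
        found = glob.glob(pattern)
        print(f"{pattern}: {found or 'none found'}")
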
Dec  9 05:43:01 np0005551604 python3.9[226279]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/ovn_metadata_agent recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:43:01 np0005551604 podman[226278]: 2025-12-09 10:43:01.860406321 +0000 UTC m=+0.207673255 container health_status e0a077177b2f078df1f170a6e5c0e8e08d4365b999ec0c487047ed6ab628f3d6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, org.label-schema.license=GPLv2, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Dec  9 05:43:02 np0005551604 python3.9[226455]: ansible-containers.podman.podman_container_info Invoked with name=['multipathd'] executable=podman
Dec  9 05:43:03 np0005551604 podman[226593]: 2025-12-09 10:43:03.799476571 +0000 UTC m=+0.111367733 container health_status d3a438131bb4ae6fd62d2e1493edbbbd51d1b8d6cbe1e9243f414a3aa421452b (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  9 05:43:04 np0005551604 python3.9[226644]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=multipathd detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  9 05:43:04 np0005551604 systemd[1]: Started libpod-conmon-0391d8911d61abd7376f1f93f329cadfe8d3add845c9e6f46fc2c3dfbcc4f02a.scope.
Dec  9 05:43:04 np0005551604 podman[226645]: 2025-12-09 10:43:04.177865177 +0000 UTC m=+0.130339558 container exec 0391d8911d61abd7376f1f93f329cadfe8d3add845c9e6f46fc2c3dfbcc4f02a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202)
Dec  9 05:43:04 np0005551604 podman[226645]: 2025-12-09 10:43:04.211942031 +0000 UTC m=+0.164416382 container exec_died 0391d8911d61abd7376f1f93f329cadfe8d3add845c9e6f46fc2c3dfbcc4f02a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec  9 05:43:04 np0005551604 systemd[1]: libpod-conmon-0391d8911d61abd7376f1f93f329cadfe8d3add845c9e6f46fc2c3dfbcc4f02a.scope: Deactivated successfully.
Dec  9 05:43:05 np0005551604 python3.9[226825]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=multipathd detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  9 05:43:05 np0005551604 systemd[1]: Started libpod-conmon-0391d8911d61abd7376f1f93f329cadfe8d3add845c9e6f46fc2c3dfbcc4f02a.scope.
Dec  9 05:43:05 np0005551604 podman[226826]: 2025-12-09 10:43:05.414703853 +0000 UTC m=+0.194621830 container exec 0391d8911d61abd7376f1f93f329cadfe8d3add845c9e6f46fc2c3dfbcc4f02a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2)
Dec  9 05:43:05 np0005551604 podman[226826]: 2025-12-09 10:43:05.422571157 +0000 UTC m=+0.202489164 container exec_died 0391d8911d61abd7376f1f93f329cadfe8d3add845c9e6f46fc2c3dfbcc4f02a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251202)
Dec  9 05:43:05 np0005551604 systemd[1]: libpod-conmon-0391d8911d61abd7376f1f93f329cadfe8d3add845c9e6f46fc2c3dfbcc4f02a.scope: Deactivated successfully.
Dec  9 05:43:06 np0005551604 python3.9[227007]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/multipathd recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:43:07 np0005551604 python3.9[227159]: ansible-containers.podman.podman_container_info Invoked with name=['ceilometer_agent_compute'] executable=podman
Dec  9 05:43:08 np0005551604 python3.9[227324]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ceilometer_agent_compute detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  9 05:43:08 np0005551604 systemd[1]: Started libpod-conmon-b432835229990b9e7cd237d75f8273b15e565fca524d4ea9a7c1f1bf3c773614.scope.
Dec  9 05:43:08 np0005551604 podman[227325]: 2025-12-09 10:43:08.556137766 +0000 UTC m=+0.097523027 container exec b432835229990b9e7cd237d75f8273b15e565fca524d4ea9a7c1f1bf3c773614 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, tcib_managed=true, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Dec  9 05:43:08 np0005551604 podman[227325]: 2025-12-09 10:43:08.563054353 +0000 UTC m=+0.104439604 container exec_died b432835229990b9e7cd237d75f8273b15e565fca524d4ea9a7c1f1bf3c773614 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.license=GPLv2)
Dec  9 05:43:08 np0005551604 systemd[1]: libpod-conmon-b432835229990b9e7cd237d75f8273b15e565fca524d4ea9a7c1f1bf3c773614.scope: Deactivated successfully.
Dec  9 05:43:09 np0005551604 python3.9[227506]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ceilometer_agent_compute detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  9 05:43:09 np0005551604 systemd[1]: Started libpod-conmon-b432835229990b9e7cd237d75f8273b15e565fca524d4ea9a7c1f1bf3c773614.scope.
Dec  9 05:43:09 np0005551604 podman[227507]: 2025-12-09 10:43:09.657650262 +0000 UTC m=+0.105373781 container exec b432835229990b9e7cd237d75f8273b15e565fca524d4ea9a7c1f1bf3c773614 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0)
Dec  9 05:43:09 np0005551604 podman[227507]: 2025-12-09 10:43:09.690829912 +0000 UTC m=+0.138553341 container exec_died b432835229990b9e7cd237d75f8273b15e565fca524d4ea9a7c1f1bf3c773614 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.build-date=20251125, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team)
Dec  9 05:43:09 np0005551604 systemd[1]: libpod-conmon-b432835229990b9e7cd237d75f8273b15e565fca524d4ea9a7c1f1bf3c773614.scope: Deactivated successfully.
Dec  9 05:43:10 np0005551604 python3.9[227689]: ansible-ansible.builtin.file Invoked with group=42405 mode=0700 owner=42405 path=/var/lib/openstack/healthchecks/ceilometer_agent_compute recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:43:11 np0005551604 python3.9[227841]: ansible-containers.podman.podman_container_info Invoked with name=['node_exporter'] executable=podman
Dec  9 05:43:12 np0005551604 podman[227978]: 2025-12-09 10:43:12.461391541 +0000 UTC m=+0.086912299 container health_status 0391d8911d61abd7376f1f93f329cadfe8d3add845c9e6f46fc2c3dfbcc4f02a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  9 05:43:12 np0005551604 python3.9[228025]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=node_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  9 05:43:12 np0005551604 systemd[1]: Started libpod-conmon-d3a438131bb4ae6fd62d2e1493edbbbd51d1b8d6cbe1e9243f414a3aa421452b.scope.
Dec  9 05:43:12 np0005551604 podman[228027]: 2025-12-09 10:43:12.836394836 +0000 UTC m=+0.161841382 container exec d3a438131bb4ae6fd62d2e1493edbbbd51d1b8d6cbe1e9243f414a3aa421452b (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec  9 05:43:12 np0005551604 podman[228027]: 2025-12-09 10:43:12.869482344 +0000 UTC m=+0.194928880 container exec_died d3a438131bb4ae6fd62d2e1493edbbbd51d1b8d6cbe1e9243f414a3aa421452b (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec  9 05:43:12 np0005551604 systemd[1]: libpod-conmon-d3a438131bb4ae6fd62d2e1493edbbbd51d1b8d6cbe1e9243f414a3aa421452b.scope: Deactivated successfully.
Dec  9 05:43:13 np0005551604 python3.9[228210]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=node_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  9 05:43:14 np0005551604 systemd[1]: Started libpod-conmon-d3a438131bb4ae6fd62d2e1493edbbbd51d1b8d6cbe1e9243f414a3aa421452b.scope.
Dec  9 05:43:14 np0005551604 podman[228211]: 2025-12-09 10:43:14.483519525 +0000 UTC m=+0.556491930 container exec d3a438131bb4ae6fd62d2e1493edbbbd51d1b8d6cbe1e9243f414a3aa421452b (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec  9 05:43:14 np0005551604 podman[228211]: 2025-12-09 10:43:14.700576523 +0000 UTC m=+0.773548948 container exec_died d3a438131bb4ae6fd62d2e1493edbbbd51d1b8d6cbe1e9243f414a3aa421452b (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  9 05:43:14 np0005551604 systemd[1]: libpod-conmon-d3a438131bb4ae6fd62d2e1493edbbbd51d1b8d6cbe1e9243f414a3aa421452b.scope: Deactivated successfully.
Dec  9 05:43:15 np0005551604 python3.9[228393]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/node_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:43:16 np0005551604 podman[228517]: 2025-12-09 10:43:16.433090769 +0000 UTC m=+0.107735364 container health_status 8508a94dacd5acdb5dbf860f4282331529be5c86ebd3e90b10e1dde8bc5013e9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec  9 05:43:16 np0005551604 python3.9[228563]: ansible-containers.podman.podman_container_info Invoked with name=['podman_exporter'] executable=podman
Dec  9 05:43:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:43:16.971 106644 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  9 05:43:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:43:16.971 106644 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  9 05:43:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:43:16.972 106644 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  9 05:43:17 np0005551604 python3.9[228736]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=podman_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  9 05:43:17 np0005551604 systemd[1]: Started libpod-conmon-8508a94dacd5acdb5dbf860f4282331529be5c86ebd3e90b10e1dde8bc5013e9.scope.
Dec  9 05:43:17 np0005551604 podman[228737]: 2025-12-09 10:43:17.777520046 +0000 UTC m=+0.127107830 container exec 8508a94dacd5acdb5dbf860f4282331529be5c86ebd3e90b10e1dde8bc5013e9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  9 05:43:17 np0005551604 podman[228737]: 2025-12-09 10:43:17.814300794 +0000 UTC m=+0.163888568 container exec_died 8508a94dacd5acdb5dbf860f4282331529be5c86ebd3e90b10e1dde8bc5013e9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec  9 05:43:17 np0005551604 nova_compute[189493]: 2025-12-09 10:43:17.836 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  9 05:43:17 np0005551604 systemd[1]: libpod-conmon-8508a94dacd5acdb5dbf860f4282331529be5c86ebd3e90b10e1dde8bc5013e9.scope: Deactivated successfully.
Dec  9 05:43:18 np0005551604 python3.9[228917]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=podman_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  9 05:43:18 np0005551604 nova_compute[189493]: 2025-12-09 10:43:18.841 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  9 05:43:18 np0005551604 nova_compute[189493]: 2025-12-09 10:43:18.842 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  9 05:43:18 np0005551604 nova_compute[189493]: 2025-12-09 10:43:18.843 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  9 05:43:19 np0005551604 systemd[1]: Started libpod-conmon-8508a94dacd5acdb5dbf860f4282331529be5c86ebd3e90b10e1dde8bc5013e9.scope.
Dec  9 05:43:19 np0005551604 podman[228918]: 2025-12-09 10:43:19.828616315 +0000 UTC m=+1.042881057 container exec 8508a94dacd5acdb5dbf860f4282331529be5c86ebd3e90b10e1dde8bc5013e9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec  9 05:43:19 np0005551604 nova_compute[189493]: 2025-12-09 10:43:19.841 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  9 05:43:19 np0005551604 nova_compute[189493]: 2025-12-09 10:43:19.841 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec  9 05:43:19 np0005551604 nova_compute[189493]: 2025-12-09 10:43:19.842 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec  9 05:43:19 np0005551604 nova_compute[189493]: 2025-12-09 10:43:19.861 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec  9 05:43:19 np0005551604 nova_compute[189493]: 2025-12-09 10:43:19.862 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  9 05:43:19 np0005551604 podman[228918]: 2025-12-09 10:43:19.865124146 +0000 UTC m=+1.079388898 container exec_died 8508a94dacd5acdb5dbf860f4282331529be5c86ebd3e90b10e1dde8bc5013e9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec  9 05:43:19 np0005551604 nova_compute[189493]: 2025-12-09 10:43:19.867 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec  9 05:43:19 np0005551604 systemd[1]: libpod-conmon-8508a94dacd5acdb5dbf860f4282331529be5c86ebd3e90b10e1dde8bc5013e9.scope: Deactivated successfully.
Dec  9 05:43:20 np0005551604 podman[229070]: 2025-12-09 10:43:20.492721483 +0000 UTC m=+0.061701445 container health_status ceb1c84a2b093143b9383b7e11364d7e851348d724743a0cd9ce4fd0c7070c92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=starting, health_failing_streak=2, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, config_id=edpm, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3)
Dec  9 05:43:20 np0005551604 systemd[1]: ceb1c84a2b093143b9383b7e11364d7e851348d724743a0cd9ce4fd0c7070c92-31abf5bbaf1ad868.service: Main process exited, code=exited, status=1/FAILURE
Dec  9 05:43:20 np0005551604 systemd[1]: ceb1c84a2b093143b9383b7e11364d7e851348d724743a0cd9ce4fd0c7070c92-31abf5bbaf1ad868.service: Failed with result 'exit-code'.
Dec  9 05:43:20 np0005551604 python3.9[229116]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/podman_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:43:20 np0005551604 nova_compute[189493]: 2025-12-09 10:43:20.842 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  9 05:43:20 np0005551604 nova_compute[189493]: 2025-12-09 10:43:20.857 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  9 05:43:20 np0005551604 nova_compute[189493]: 2025-12-09 10:43:20.857 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  9 05:43:20 np0005551604 nova_compute[189493]: 2025-12-09 10:43:20.858 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  9 05:43:20 np0005551604 nova_compute[189493]: 2025-12-09 10:43:20.887 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  9 05:43:20 np0005551604 nova_compute[189493]: 2025-12-09 10:43:20.887 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  9 05:43:20 np0005551604 nova_compute[189493]: 2025-12-09 10:43:20.888 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  9 05:43:20 np0005551604 nova_compute[189493]: 2025-12-09 10:43:20.888 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec  9 05:43:21 np0005551604 nova_compute[189493]: 2025-12-09 10:43:21.366 189497 WARNING nova.virt.libvirt.driver [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec  9 05:43:21 np0005551604 nova_compute[189493]: 2025-12-09 10:43:21.367 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5658MB free_disk=72.23981094360352GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec  9 05:43:21 np0005551604 nova_compute[189493]: 2025-12-09 10:43:21.367 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  9 05:43:21 np0005551604 nova_compute[189493]: 2025-12-09 10:43:21.368 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  9 05:43:21 np0005551604 nova_compute[189493]: 2025-12-09 10:43:21.467 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec  9 05:43:21 np0005551604 nova_compute[189493]: 2025-12-09 10:43:21.467 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec  9 05:43:21 np0005551604 nova_compute[189493]: 2025-12-09 10:43:21.492 189497 DEBUG nova.compute.provider_tree [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Inventory has not changed in ProviderTree for provider: cdc1168d-33c9-4d2c-8f23-1b695a68afd0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec  9 05:43:21 np0005551604 nova_compute[189493]: 2025-12-09 10:43:21.509 189497 DEBUG nova.scheduler.client.report [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Inventory has not changed for provider cdc1168d-33c9-4d2c-8f23-1b695a68afd0 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec  9 05:43:21 np0005551604 nova_compute[189493]: 2025-12-09 10:43:21.511 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec  9 05:43:21 np0005551604 nova_compute[189493]: 2025-12-09 10:43:21.511 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.143s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  9 05:43:21 np0005551604 python3.9[229268]: ansible-containers.podman.podman_container_info Invoked with name=['openstack_network_exporter'] executable=podman
Dec  9 05:43:22 np0005551604 podman[229404]: 2025-12-09 10:43:22.896639875 +0000 UTC m=+0.134440959 container health_status 8ad198c17f1da12dc50d5e17562d0139fb2a2f84db056ee9551dbf4f34c4cb9d (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, build-date=2024-09-18T21:23:30, release-0.7.12=, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.component=ubi9-container, maintainer=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_id=edpm, release=1214.1726694543, architecture=x86_64, container_name=kepler, io.buildah.version=1.29.0, io.openshift.expose-services=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, vcs-type=git, vendor=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9)
Dec  9 05:43:23 np0005551604 python3.9[229451]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=openstack_network_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  9 05:43:23 np0005551604 systemd[1]: Started libpod-conmon-5da5cd4e36e0bba48fb617392bc8983ed1dbced7e4599ef74bb3327a2d50468d.scope.
Dec  9 05:43:23 np0005551604 podman[229453]: 2025-12-09 10:43:23.224297975 +0000 UTC m=+0.135695163 container exec 5da5cd4e36e0bba48fb617392bc8983ed1dbced7e4599ef74bb3327a2d50468d (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, io.openshift.expose-services=, version=9.6, config_id=edpm, container_name=openstack_network_exporter, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, release=1755695350, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., vcs-type=git, distribution-scope=public, managed_by=edpm_ansible, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.component=ubi9-minimal-container, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec  9 05:43:23 np0005551604 podman[229472]: 2025-12-09 10:43:23.346919342 +0000 UTC m=+0.089667894 container exec_died 5da5cd4e36e0bba48fb617392bc8983ed1dbced7e4599ef74bb3327a2d50468d (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, io.openshift.tags=minimal rhel9, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, release=1755695350, managed_by=edpm_ansible, name=ubi9-minimal, container_name=openstack_network_exporter, distribution-scope=public, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., architecture=x86_64, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec  9 05:43:23 np0005551604 podman[229453]: 2025-12-09 10:43:23.380088581 +0000 UTC m=+0.291485779 container exec_died 5da5cd4e36e0bba48fb617392bc8983ed1dbced7e4599ef74bb3327a2d50468d (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-minimal-container, name=ubi9-minimal, release=1755695350, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, version=9.6, distribution-scope=public, config_id=edpm, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., architecture=x86_64)
Dec  9 05:43:23 np0005551604 systemd[1]: libpod-conmon-5da5cd4e36e0bba48fb617392bc8983ed1dbced7e4599ef74bb3327a2d50468d.scope: Deactivated successfully.
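[editor's note] The exec sequence above (podman_container_exec running 'id -u', wrapped in a transient libpod-conmon scope that systemd starts and deactivates per session) is how the playbook discovers which UID the openstack_network_exporter service runs as. Roughly the same probe from the host, as a sketch rather than the module's internals:

```python
import subprocess

# Ask the running container which UID its default user maps to,
# mirroring the ansible podman_container_exec task in the log.
result = subprocess.run(
    ["podman", "exec", "openstack_network_exporter", "id", "-u"],
    capture_output=True, text=True, check=True,
)
uid = int(result.stdout.strip())
print(f"container runs as uid {uid}")
```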
Dec  9 05:43:24 np0005551604 podman[229606]: 2025-12-09 10:43:24.209377472 +0000 UTC m=+0.089156800 container health_status b432835229990b9e7cd237d75f8273b15e565fca524d4ea9a7c1f1bf3c773614 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251125, tcib_managed=true, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Dec  9 05:43:24 np0005551604 python3.9[229653]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=openstack_network_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  9 05:43:24 np0005551604 systemd[1]: Started libpod-conmon-5da5cd4e36e0bba48fb617392bc8983ed1dbced7e4599ef74bb3327a2d50468d.scope.
Dec  9 05:43:24 np0005551604 podman[229655]: 2025-12-09 10:43:24.551573656 +0000 UTC m=+0.107058526 container exec 5da5cd4e36e0bba48fb617392bc8983ed1dbced7e4599ef74bb3327a2d50468d (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, name=ubi9-minimal, io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, vendor=Red Hat, Inc., io.openshift.expose-services=, version=9.6, url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, distribution-scope=public, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7)
Dec  9 05:43:24 np0005551604 podman[229655]: 2025-12-09 10:43:24.649598286 +0000 UTC m=+0.205083106 container exec_died 5da5cd4e36e0bba48fb617392bc8983ed1dbced7e4599ef74bb3327a2d50468d (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, managed_by=edpm_ansible, io.openshift.expose-services=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, version=9.6, io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container, config_id=edpm, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., name=ubi9-minimal, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, release=1755695350)
Dec  9 05:43:24 np0005551604 systemd[1]: libpod-conmon-5da5cd4e36e0bba48fb617392bc8983ed1dbced7e4599ef74bb3327a2d50468d.scope: Deactivated successfully.
Dec  9 05:43:25 np0005551604 podman[229806]: 2025-12-09 10:43:25.463738904 +0000 UTC m=+0.073884115 container health_status 8f562587c42532f877bd4ac5090cf2d81dd9415b6201e22f74972e6d6b9e9403 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true)
Dec  9 05:43:25 np0005551604 python3.9[229852]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/openstack_network_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
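[editor's note] The ansible.builtin.file task above then applies the discovered ownership (owner/group 0 here) and mode 0700 recursively to the exporter's healthcheck mount. An equivalent sketch in plain Python, with os.walk standing in for the module's recursion (must run as root):

```python
import os

# Recursively set owner root:root and mode 0700 on the healthcheck
# directory, as the ansible.builtin.file task above does.
path = "/var/lib/openstack/healthchecks/openstack_network_exporter"
for dirpath, dirnames, filenames in os.walk(path):
    os.chown(dirpath, 0, 0)
    os.chmod(dirpath, 0o700)
    for name in filenames:
        full = os.path.join(dirpath, name)
        os.chown(full, 0, 0)
        os.chmod(full, 0o700)
```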
Dec  9 05:43:26 np0005551604 python3.9[230004]: ansible-containers.podman.podman_container_info Invoked with name=['ceilometer_agent_ipmi'] executable=podman
Dec  9 05:43:27 np0005551604 python3.9[230168]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ceilometer_agent_ipmi detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  9 05:43:28 np0005551604 systemd[1]: Started libpod-conmon-ceb1c84a2b093143b9383b7e11364d7e851348d724743a0cd9ce4fd0c7070c92.scope.
Dec  9 05:43:28 np0005551604 podman[230169]: 2025-12-09 10:43:28.092697152 +0000 UTC m=+0.195772363 container exec ceb1c84a2b093143b9383b7e11364d7e851348d724743a0cd9ce4fd0c7070c92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, config_id=edpm, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  9 05:43:28 np0005551604 podman[230169]: 2025-12-09 10:43:28.125343587 +0000 UTC m=+0.228418818 container exec_died ceb1c84a2b093143b9383b7e11364d7e851348d724743a0cd9ce4fd0c7070c92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=edpm, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec  9 05:43:28 np0005551604 systemd[1]: libpod-conmon-ceb1c84a2b093143b9383b7e11364d7e851348d724743a0cd9ce4fd0c7070c92.scope: Deactivated successfully.
Dec  9 05:43:29 np0005551604 python3.9[230350]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ceilometer_agent_ipmi detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  9 05:43:29 np0005551604 systemd[1]: Started libpod-conmon-ceb1c84a2b093143b9383b7e11364d7e851348d724743a0cd9ce4fd0c7070c92.scope.
Dec  9 05:43:29 np0005551604 podman[230351]: 2025-12-09 10:43:29.465859218 +0000 UTC m=+0.231229076 container exec ceb1c84a2b093143b9383b7e11364d7e851348d724743a0cd9ce4fd0c7070c92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_managed=true, container_name=ceilometer_agent_ipmi, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec  9 05:43:29 np0005551604 podman[230351]: 2025-12-09 10:43:29.714291028 +0000 UTC m=+0.479660866 container exec_died ceb1c84a2b093143b9383b7e11364d7e851348d724743a0cd9ce4fd0c7070c92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  9 05:43:29 np0005551604 podman[203687]: time="2025-12-09T10:43:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  9 05:43:29 np0005551604 podman[203687]: @ - - [09/Dec/2025:10:43:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28291 "" "Go-http-client/1.1"
Dec  9 05:43:29 np0005551604 podman[203687]: @ - - [09/Dec/2025:10:43:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4252 "" "Go-http-client/1.1"
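[editor's note] The two GET lines above are the podman system service answering libpod REST calls; judging by the podman_exporter config elsewhere in this log (CONTAINER_HOST=unix:///run/podman/podman.sock), the client is prometheus-podman-exporter. A standard-library sketch of issuing the same containers/json query over that socket:

```python
import http.client
import json
import socket

class UnixHTTPConnection(http.client.HTTPConnection):
    """HTTPConnection that speaks HTTP over an AF_UNIX socket."""
    def __init__(self, socket_path):
        super().__init__("localhost")  # host is unused; connect() is overridden
        self.socket_path = socket_path

    def connect(self):
        sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        sock.connect(self.socket_path)
        self.sock = sock

conn = UnixHTTPConnection("/run/podman/podman.sock")
conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
containers = json.loads(conn.getresponse().read())
print([c["Names"] for c in containers])
```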
Dec  9 05:43:29 np0005551604 systemd[1]: libpod-conmon-ceb1c84a2b093143b9383b7e11364d7e851348d724743a0cd9ce4fd0c7070c92.scope: Deactivated successfully.
Dec  9 05:43:30 np0005551604 podman[230532]: 2025-12-09 10:43:30.649957504 +0000 UTC m=+0.086363364 container health_status 5da5cd4e36e0bba48fb617392bc8983ed1dbced7e4599ef74bb3327a2d50468d (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., container_name=openstack_network_exporter, managed_by=edpm_ansible, name=ubi9-minimal, maintainer=Red Hat, Inc., vcs-type=git, config_id=edpm, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, release=1755695350, version=9.6)
Dec  9 05:43:30 np0005551604 python3.9[230533]: ansible-ansible.builtin.file Invoked with group=42405 mode=0700 owner=42405 path=/var/lib/openstack/healthchecks/ceilometer_agent_ipmi recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:43:31 np0005551604 openstack_network_exporter[205823]: ERROR   10:43:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  9 05:43:31 np0005551604 openstack_network_exporter[205823]: ERROR   10:43:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  9 05:43:31 np0005551604 openstack_network_exporter[205823]: ERROR   10:43:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  9 05:43:31 np0005551604 openstack_network_exporter[205823]: ERROR   10:43:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  9 05:43:31 np0005551604 openstack_network_exporter[205823]: ERROR   10:43:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
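[editor's note] The exporter errors above mean no ovn-northd or ovsdb-server control sockets were found under the paths mounted into the container (its config maps /var/run/openvswitch to /run/openvswitch and /var/lib/openvswitch/ovn to /run/ovn), which is expected on a compute node running ovn-controller but not the OVN central services. A quick host-side check, as a sketch (the exact *.ctl names embed the daemons' PIDs):

```python
import glob

# Control sockets are named <daemon>.<pid>.ctl; on a compute node we
# expect ovs-vswitchd/ovsdb-server sockets but no ovn-northd socket.
for pattern in ("/var/run/openvswitch/*.ctl", "/var/lib/openvswitch/ovn/*.ctl"):
    print(pattern, "->", glob.glob(pattern))
```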
Dec  9 05:43:31 np0005551604 python3.9[230704]: ansible-containers.podman.podman_container_info Invoked with name=['kepler'] executable=podman
Dec  9 05:43:32 np0005551604 podman[230841]: 2025-12-09 10:43:32.764835724 +0000 UTC m=+0.153853156 container health_status e0a077177b2f078df1f170a6e5c0e8e08d4365b999ec0c487047ed6ab628f3d6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3)
Dec  9 05:43:32 np0005551604 python3.9[230889]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=kepler detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  9 05:43:33 np0005551604 systemd[1]: Started libpod-conmon-8ad198c17f1da12dc50d5e17562d0139fb2a2f84db056ee9551dbf4f34c4cb9d.scope.
Dec  9 05:43:33 np0005551604 podman[230896]: 2025-12-09 10:43:33.060603378 +0000 UTC m=+0.097195619 container exec 8ad198c17f1da12dc50d5e17562d0139fb2a2f84db056ee9551dbf4f34c4cb9d (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, vendor=Red Hat, Inc., com.redhat.component=ubi9-container, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., architecture=x86_64, config_id=edpm, version=9.4, io.buildah.version=1.29.0, io.openshift.expose-services=, maintainer=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=base rhel9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, managed_by=edpm_ansible, vcs-type=git)
Dec  9 05:43:33 np0005551604 podman[230896]: 2025-12-09 10:43:33.094324273 +0000 UTC m=+0.130916544 container exec_died 8ad198c17f1da12dc50d5e17562d0139fb2a2f84db056ee9551dbf4f34c4cb9d (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, architecture=x86_64, managed_by=edpm_ansible, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, distribution-scope=public, release=1214.1726694543, vcs-type=git, build-date=2024-09-18T21:23:30, io.openshift.tags=base rhel9, version=9.4, com.redhat.component=ubi9-container, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9, vendor=Red Hat, Inc., name=ubi9, config_id=edpm, release-0.7.12=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, container_name=kepler, io.buildah.version=1.29.0)
Dec  9 05:43:33 np0005551604 systemd[1]: libpod-conmon-8ad198c17f1da12dc50d5e17562d0139fb2a2f84db056ee9551dbf4f34c4cb9d.scope: Deactivated successfully.
Dec  9 05:43:33 np0005551604 podman[231080]: 2025-12-09 10:43:33.938496516 +0000 UTC m=+0.083365582 container health_status d3a438131bb4ae6fd62d2e1493edbbbd51d1b8d6cbe1e9243f414a3aa421452b (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec  9 05:43:33 np0005551604 python3.9[231079]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=kepler detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Dec  9 05:43:34 np0005551604 systemd[1]: Started libpod-conmon-8ad198c17f1da12dc50d5e17562d0139fb2a2f84db056ee9551dbf4f34c4cb9d.scope.
Dec  9 05:43:34 np0005551604 podman[231104]: 2025-12-09 10:43:34.065959335 +0000 UTC m=+0.077938756 container exec 8ad198c17f1da12dc50d5e17562d0139fb2a2f84db056ee9551dbf4f34c4cb9d (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.29.0, container_name=kepler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.component=ubi9-container, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, name=ubi9, release=1214.1726694543, architecture=x86_64, vendor=Red Hat, Inc., vcs-type=git, version=9.4, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., release-0.7.12=)
Dec  9 05:43:34 np0005551604 podman[231104]: 2025-12-09 10:43:34.096183944 +0000 UTC m=+0.108163345 container exec_died 8ad198c17f1da12dc50d5e17562d0139fb2a2f84db056ee9551dbf4f34c4cb9d (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, config_id=edpm, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, io.openshift.tags=base rhel9, architecture=x86_64, vendor=Red Hat, Inc., com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., name=ubi9, distribution-scope=public, build-date=2024-09-18T21:23:30, managed_by=edpm_ansible, io.openshift.expose-services=, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, release=1214.1726694543, release-0.7.12=)
Dec  9 05:43:34 np0005551604 systemd[1]: libpod-conmon-8ad198c17f1da12dc50d5e17562d0139fb2a2f84db056ee9551dbf4f34c4cb9d.scope: Deactivated successfully.
Dec  9 05:43:35 np0005551604 python3.9[231288]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/kepler recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:43:36 np0005551604 python3.9[231441]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall/ state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:43:36 np0005551604 python3.9[231593]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/kepler.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  9 05:43:37 np0005551604 python3.9[231716]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/kepler.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1765277016.396287-844-163438373451777/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=40b8960d32c81de936cddbeb137a8240ecc54e7b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
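[editor's note] The copy above drops a per-service rule file (kepler.yaml, rendered from firewall.yaml.j2) into /var/lib/edpm-config/firewall, which the edpm_nftables_from_files step at 05:43:45 later aggregates. The file's contents are not in this log; given that kepler publishes port 8888 (per its config_data above), a plausible shape for one rule entry, written as the Python structure it would parse to (the field names follow the TripleO-style firewall rule spec and are an assumption about the edpm-ansible format, not taken from this log):

```python
# Hypothetical kepler.yaml content -- the real template is not shown
# in this log. Field names are assumed (TripleO-style rule spec).
kepler_rule = {
    "rule_name": "140 kepler exporter",  # assumed rule name
    "rule": {
        "proto": "tcp",
        "dport": 8888,  # kepler's published port, from config_data above
    },
}
```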
Dec  9 05:43:38 np0005551604 python3.9[231868]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:43:39 np0005551604 python3.9[232020]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  9 05:43:40 np0005551604 python3.9[232098]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:43:41 np0005551604 python3.9[232250]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  9 05:43:41 np0005551604 python3.9[232328]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.47_zv256 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:43:42 np0005551604 python3.9[232480]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  9 05:43:42 np0005551604 podman[232524]: 2025-12-09 10:43:42.915238766 +0000 UTC m=+0.072838457 container health_status 0391d8911d61abd7376f1f93f329cadfe8d3add845c9e6f46fc2c3dfbcc4f02a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, container_name=multipathd, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team)
Dec  9 05:43:43 np0005551604 python3.9[232575]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:43:44 np0005551604 python3.9[232727]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
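[editor's note] 'nft -j list ruleset' above dumps the live nftables ruleset as JSON, giving the following edpm_nftables_from_files step a machine-readable view of the current state to compare against the desired rule files. A sketch of consuming that output:

```python
import json
import subprocess

# Dump the live nftables ruleset as JSON, as the ansible command
# above does, and list the table names it contains.
out = subprocess.run(
    ["nft", "-j", "list", "ruleset"],
    capture_output=True, text=True, check=True,
).stdout
ruleset = json.loads(out)["nftables"]
tables = [item["table"]["name"] for item in ruleset if "table" in item]
print("tables:", tables)
```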
Dec  9 05:43:45 np0005551604 python3[232880]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Dec  9 05:43:46 np0005551604 python3.9[233032]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  9 05:43:46 np0005551604 podman[233110]: 2025-12-09 10:43:46.623137156 +0000 UTC m=+0.063087393 container health_status 8508a94dacd5acdb5dbf860f4282331529be5c86ebd3e90b10e1dde8bc5013e9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  9 05:43:46 np0005551604 python3.9[233111]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:43:47 np0005551604 python3.9[233285]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  9 05:43:48 np0005551604 python3.9[233363]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:43:49 np0005551604 python3.9[233515]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  9 05:43:49 np0005551604 python3.9[233594]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:43:50 np0005551604 podman[233746]: 2025-12-09 10:43:50.648542737 +0000 UTC m=+0.086085071 container health_status ceb1c84a2b093143b9383b7e11364d7e851348d724743a0cd9ce4fd0c7070c92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=edpm)
Dec  9 05:43:50 np0005551604 python3.9[233747]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  9 05:43:51 np0005551604 python3.9[233844]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:43:52 np0005551604 python3.9[233996]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  9 05:43:53 np0005551604 podman[234093]: 2025-12-09 10:43:53.187197077 +0000 UTC m=+0.091472429 container health_status 8ad198c17f1da12dc50d5e17562d0139fb2a2f84db056ee9551dbf4f34c4cb9d (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release=1214.1726694543, maintainer=Red Hat, Inc., managed_by=edpm_ansible, release-0.7.12=, io.openshift.expose-services=, version=9.4, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, architecture=x86_64, build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, config_id=edpm, container_name=kepler, com.redhat.component=ubi9-container, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9, distribution-scope=public, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f)
Dec  9 05:43:53 np0005551604 python3.9[234139]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1765277031.7062-969-139715743232838/.source.nft follow=False _original_basename=ruleset.j2 checksum=b82fbd2c71bb7c36c630c2301913f0f42fd2e7ce backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:43:54 np0005551604 python3.9[234291]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:43:54 np0005551604 podman[234381]: 2025-12-09 10:43:54.979234812 +0000 UTC m=+0.116551723 container health_status b432835229990b9e7cd237d75f8273b15e565fca524d4ea9a7c1f1bf3c773614 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.4)
Dec  9 05:43:55 np0005551604 python3.9[234463]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
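[annotation] The pipeline just invoked concatenates the five fragment files in their load order and feeds them to `nft -c -f -`, a check-only parse that commits nothing to the kernel, so a syntax error in any fragment fails the play before the real load. The same guard, sketched in Python with the file list taken from the command above:

    import subprocess

    FRAGMENTS = [
        "/etc/nftables/edpm-chains.nft",
        "/etc/nftables/edpm-flushes.nft",
        "/etc/nftables/edpm-rules.nft",
        "/etc/nftables/edpm-update-jumps.nft",
        "/etc/nftables/edpm-jumps.nft",
    ]

    combined = "".join(open(path).read() for path in FRAGMENTS)
    # -c validates only; nothing is applied unless this exits 0.
    subprocess.run(["nft", "-c", "-f", "-"], input=combined, text=True, check=True)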
Dec  9 05:43:55 np0005551604 podman[234543]: 2025-12-09 10:43:55.920058828 +0000 UTC m=+0.077243729 container health_status 8f562587c42532f877bd4ac5090cf2d81dd9415b6201e22f74972e6d6b9e9403 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_metadata_agent)
Dec  9 05:43:56 np0005551604 python3.9[234636]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"#012include "/etc/nftables/edpm-chains.nft"#012include "/etc/nftables/edpm-rules.nft"#012include "/etc/nftables/edpm-jumps.nft"#012 path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
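[annotation] Decoding the #012 (newline) escapes in the blockinfile invocation above, the managed block written to /etc/sysconfig/nftables.conf reads:

    # BEGIN ANSIBLE MANAGED BLOCK
    include "/etc/nftables/iptables.nft"
    include "/etc/nftables/edpm-chains.nft"
    include "/etc/nftables/edpm-rules.nft"
    include "/etc/nftables/edpm-jumps.nft"
    # END ANSIBLE MANAGED BLOCK

The validate=nft -c -f %s argument makes Ansible dry-run the edited file before moving it into place, so a bad include can never land in the boot-time config.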
Dec  9 05:43:57 np0005551604 python3.9[234788]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  9 05:43:58 np0005551604 python3.9[234941]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Dec  9 05:43:59 np0005551604 python3.9[235095]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  9 05:43:59 np0005551604 podman[203687]: time="2025-12-09T10:43:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  9 05:43:59 np0005551604 podman[203687]: @ - - [09/Dec/2025:10:43:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28290 "" "Go-http-client/1.1"
Dec  9 05:43:59 np0005551604 podman[203687]: @ - - [09/Dec/2025:10:43:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4255 "" "Go-http-client/1.1"
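[annotation] The two GETs above are the podman exporter polling the libpod REST API over the Unix socket mounted into it (/run/podman/podman.sock, per the podman_exporter config logged earlier). A self-contained sketch of the same query; the UnixHTTPConnection helper is our own scaffolding, not part of podman:

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """http.client.HTTPConnection pointed at a Unix socket."""
        def __init__(self, sock_path):
            super().__init__("localhost")
            self._sock_path = sock_path

        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self._sock_path)

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    for c in json.loads(conn.getresponse().read()):
        print(c["Names"], c["State"])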
Dec  9 05:44:00 np0005551604 python3.9[235250]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
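[annotation] Taken together, the last few tasks are a marker-file idempotency pattern: the copy task that rewrote edpm-rules.nft also touched edpm-rules.nft.changed, a later stat checks for that marker, the flush/rules/update-jumps fragments are loaded only when it exists, and the marker is removed once the load succeeds. Reduced to a sketch:

    import os
    import subprocess

    MARKER = "/etc/nftables/edpm-rules.nft.changed"
    TO_APPLY = [
        "/etc/nftables/edpm-flushes.nft",
        "/etc/nftables/edpm-rules.nft",
        "/etc/nftables/edpm-update-jumps.nft",
    ]

    if os.path.exists(MARKER):                 # did the rules change this run?
        combined = "".join(open(p).read() for p in TO_APPLY)
        subprocess.run(["nft", "-f", "-"], input=combined, text=True, check=True)
        os.remove(MARKER)                      # consume the marker; next run no-ops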
Dec  9 05:44:00 np0005551604 systemd-logind[806]: Session 27 logged out. Waiting for processes to exit.
Dec  9 05:44:00 np0005551604 systemd[1]: session-27.scope: Deactivated successfully.
Dec  9 05:44:00 np0005551604 systemd[1]: session-27.scope: Consumed 1min 38.624s CPU time.
Dec  9 05:44:00 np0005551604 systemd-logind[806]: Removed session 27.
Dec  9 05:44:01 np0005551604 podman[235275]: 2025-12-09 10:44:01.001295804 +0000 UTC m=+0.149740609 container health_status 5da5cd4e36e0bba48fb617392bc8983ed1dbced7e4599ef74bb3327a2d50468d (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.tags=minimal rhel9, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, io.buildah.version=1.33.7, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6)
Dec  9 05:44:01 np0005551604 openstack_network_exporter[205823]: ERROR   10:44:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  9 05:44:01 np0005551604 openstack_network_exporter[205823]: ERROR   10:44:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  9 05:44:01 np0005551604 openstack_network_exporter[205823]: ERROR   10:44:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  9 05:44:01 np0005551604 openstack_network_exporter[205823]: ERROR   10:44:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  9 05:44:01 np0005551604 openstack_network_exporter[205823]: ERROR   10:44:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
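[annotation] The exporter errors above all reduce to "no control socket files found": the collector looks for ovsdb-server and ovn-northd *.ctl sockets, and for a datapath, before it can scrape PMD statistics. On a compute node ovn-northd does not run at all, so that pair of errors is expected noise; the rest can be triaged with a quick host-side look at the socket directories that the container config earlier maps to /run/openvswitch and /run/ovn (a diagnostic sketch, not the exporter's own code):

    import glob

    # Control sockets the exporter probes for, at their usual host paths.
    for pattern in ("/var/run/openvswitch/*.ctl",
                    "/var/lib/openvswitch/ovn/*.ctl"):
        hits = glob.glob(pattern)
        print(pattern, "->", hits if hits else "no control sockets found")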
Dec  9 05:44:02 np0005551604 podman[235296]: 2025-12-09 10:44:02.948687671 +0000 UTC m=+0.100480024 container health_status e0a077177b2f078df1f170a6e5c0e8e08d4365b999ec0c487047ed6ab628f3d6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Dec  9 05:44:04 np0005551604 podman[235321]: 2025-12-09 10:44:04.969138403 +0000 UTC m=+0.121390444 container health_status d3a438131bb4ae6fd62d2e1493edbbbd51d1b8d6cbe1e9243f414a3aa421452b (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  9 05:44:07 np0005551604 systemd-logind[806]: New session 28 of user zuul.
Dec  9 05:44:07 np0005551604 systemd[1]: Started Session 28 of User zuul.
Dec  9 05:44:08 np0005551604 python3.9[235498]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  9 05:44:10 np0005551604 python3.9[235654]: ansible-ansible.builtin.systemd Invoked with name=rsyslog daemon_reload=False daemon_reexec=False scope=system no_block=False state=None enabled=None force=None masked=None
Dec  9 05:44:11 np0005551604 python3.9[235809]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Dec  9 05:44:12 np0005551604 python3.9[235893]: ansible-ansible.legacy.dnf Invoked with name=['rsyslog-openssl'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Dec  9 05:44:13 np0005551604 podman[235896]: 2025-12-09 10:44:13.953688006 +0000 UTC m=+0.105083720 container health_status 0391d8911d61abd7376f1f93f329cadfe8d3add845c9e6f46fc2c3dfbcc4f02a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Dec  9 05:44:16 np0005551604 python3.9[236071]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/rsyslog/ca-openshift.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  9 05:44:16 np0005551604 nova_compute[189493]: 2025-12-09 10:44:16.842 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  9 05:44:16 np0005551604 nova_compute[189493]: 2025-12-09 10:44:16.843 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Dec  9 05:44:16 np0005551604 nova_compute[189493]: 2025-12-09 10:44:16.870 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Dec  9 05:44:16 np0005551604 nova_compute[189493]: 2025-12-09 10:44:16.871 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  9 05:44:16 np0005551604 nova_compute[189493]: 2025-12-09 10:44:16.871 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Cleaning up deleted instances with incomplete migration _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Dec  9 05:44:16 np0005551604 nova_compute[189493]: 2025-12-09 10:44:16.887 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
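[annotation] This burst is nova-compute's periodic task machinery walking its registered housekeeping jobs. The pattern oslo.service implements is, in shape, a decorator that collects methods into a registry plus a loop that invokes them on a timer; a toy version of that pattern (deliberately simplified, not oslo's actual API):

    PERIODIC_TASKS = []

    def periodic_task(fn):
        PERIODIC_TASKS.append(fn)     # oslo registers via a metaclass instead
        return fn

    class ComputeManager:
        @periodic_task
        def _run_pending_deletes(self):
            print("Cleaning up deleted instances")

        @periodic_task
        def _cleanup_incomplete_migrations(self):
            print("Cleaning up deleted instances with incomplete migration")

        def run_periodic_tasks(self):
            for task in PERIODIC_TASKS:
                print(f"Running periodic task ComputeManager.{task.__name__}")
                task(self)

    ComputeManager().run_periodic_tasks()  # the real loop reruns this on intervals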
Dec  9 05:44:16 np0005551604 podman[236092]: 2025-12-09 10:44:16.94045213 +0000 UTC m=+0.095586541 container health_status 8508a94dacd5acdb5dbf860f4282331529be5c86ebd3e90b10e1dde8bc5013e9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  9 05:44:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:44:16.972 106644 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  9 05:44:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:44:16.973 106644 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  9 05:44:16 np0005551604 ovn_metadata_agent[106639]: 2025-12-09 10:44:16.973 106644 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
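[annotation] The three ovn_metadata_agent lines above are oslo.concurrency's lock instrumentation: one line when a lock is requested, one reporting how long the acquire waited, one reporting how long it was held. The same bookkeeping re-created around a plain threading.Lock (names illustrative):

    import threading
    import time
    from contextlib import contextmanager

    _locks = {}

    @contextmanager
    def timed_lock(name):
        lock = _locks.setdefault(name, threading.Lock())
        print(f'Acquiring lock "{name}"')
        t0 = time.monotonic()
        with lock:
            t1 = time.monotonic()
            print(f'Lock "{name}" acquired :: waited {t1 - t0:.3f}s')
            try:
                yield
            finally:
                print(f'Lock "{name}" "released" :: held {time.monotonic() - t1:.3f}s')

    with timed_lock("_check_child_processes"):
        pass  # critical section goes here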
Dec  9 05:44:17 np0005551604 python3.9[236217]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/rsyslog/ca-openshift.crt mode=0644 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1765277055.919134-54-144602057052743/.source.crt _original_basename=ca-openshift.crt follow=False checksum=1d88bab26da5c85710a770c705f3555781bf2a38 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:44:19 np0005551604 python3.9[236369]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/rsyslog.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:44:19 np0005551604 nova_compute[189493]: 2025-12-09 10:44:19.095 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  9 05:44:19 np0005551604 nova_compute[189493]: 2025-12-09 10:44:19.097 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  9 05:44:19 np0005551604 python3.9[236522]: ansible-ansible.legacy.stat Invoked with path=/etc/rsyslog.d/10-telemetry.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Dec  9 05:44:20 np0005551604 python3.9[236645]: ansible-ansible.legacy.copy Invoked with dest=/etc/rsyslog.d/10-telemetry.conf mode=0644 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1765277059.3268428-77-32268805749470/.source.conf _original_basename=10-telemetry.conf follow=False checksum=76865d9dd4bf9cd322a47065c046bcac194645ab backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Dec  9 05:44:20 np0005551604 nova_compute[189493]: 2025-12-09 10:44:20.840 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  9 05:44:20 np0005551604 nova_compute[189493]: 2025-12-09 10:44:20.841 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec  9 05:44:20 np0005551604 nova_compute[189493]: 2025-12-09 10:44:20.841 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec  9 05:44:20 np0005551604 nova_compute[189493]: 2025-12-09 10:44:20.860 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec  9 05:44:20 np0005551604 nova_compute[189493]: 2025-12-09 10:44:20.860 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  9 05:44:20 np0005551604 nova_compute[189493]: 2025-12-09 10:44:20.860 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  9 05:44:20 np0005551604 podman[236670]: 2025-12-09 10:44:20.933126466 +0000 UTC m=+0.086486783 container health_status ceb1c84a2b093143b9383b7e11364d7e851348d724743a0cd9ce4fd0c7070c92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  9 05:44:21 np0005551604 python3.9[236814]: ansible-ansible.builtin.systemd Invoked with name=rsyslog.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Dec  9 10:44:21 compute-0 systemd[1]: Stopping System Logging Service...
Dec  9 10:44:21 compute-0 nova_compute[189493]: 2025-12-09 10:44:21.841 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  9 10:44:21 compute-0 nova_compute[189493]: 2025-12-09 10:44:21.843 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  9 10:44:21 compute-0 nova_compute[189493]: 2025-12-09 10:44:21.843 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec  9 10:44:22 compute-0 rsyslogd[1006]: [origin software="rsyslogd" swVersion="8.2510.0-2.el9" x-pid="1006" x-info="https://www.rsyslog.com"] exiting on signal 15.
Dec  9 10:44:22 compute-0 systemd[1]: rsyslog.service: Deactivated successfully.
Dec  9 10:44:22 compute-0 systemd[1]: Stopped System Logging Service.
Dec  9 10:44:22 compute-0 systemd[1]: rsyslog.service: Consumed 4.290s CPU time, 8.7M memory peak, read 0B from disk, written 6.8M to disk.
Dec  9 10:44:22 compute-0 systemd[1]: Starting System Logging Service...
Dec  9 10:44:22 compute-0 rsyslogd[236818]: [origin software="rsyslogd" swVersion="8.2510.0-2.el9" x-pid="236818" x-info="https://www.rsyslog.com"] start
Dec  9 10:44:22 compute-0 systemd[1]: Started System Logging Service.
Dec  9 10:44:22 compute-0 rsyslogd[236818]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec  9 10:44:22 compute-0 rsyslogd[236818]: Warning: Certificate file is not set [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2330 ]
Dec  9 10:44:22 compute-0 rsyslogd[236818]: Warning: Key file is not set [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2331 ]
Dec  9 10:44:22 compute-0 rsyslogd[236818]: nsd_ossl: TLS Connection initiated with remote syslog server '172.17.0.80'. [v8.2510.0-2.el9]
Dec  9 10:44:22 compute-0 rsyslogd[236818]: nsd_ossl: Information, no shared curve between syslog client '172.17.0.80' and server [v8.2510.0-2.el9]
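[annotation] After the restart rsyslog brings up TLS to the remote collector at 172.17.0.80, but the two warnings above mean no client certificate or key was configured, so the session is server-authenticated only, against the CA installed at /etc/pki/rsyslog/ca-openshift.crt a moment earlier. A quick hand-probe of the endpoint; port 6514 is the conventional syslog-over-TLS port and an assumption here, since the log never states it:

    import socket
    import ssl

    HOST, PORT = "172.17.0.80", 6514          # port assumed, not in the log
    ctx = ssl.create_default_context(cafile="/etc/pki/rsyslog/ca-openshift.crt")
    ctx.check_hostname = False                # the collector is addressed by IP
    with socket.create_connection((HOST, PORT), timeout=5) as raw:
        with ctx.wrap_socket(raw) as tls:     # cert chain is still verified
            print(tls.version(), tls.cipher())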
Dec  9 10:44:22 compute-0 systemd[1]: session-28.scope: Deactivated successfully.
Dec  9 10:44:22 compute-0 systemd[1]: session-28.scope: Consumed 11.417s CPU time.
Dec  9 10:44:22 compute-0 systemd-logind[806]: Session 28 logged out. Waiting for processes to exit.
Dec  9 10:44:22 compute-0 systemd-logind[806]: Removed session 28.
Dec  9 10:44:22 compute-0 nova_compute[189493]: 2025-12-09 10:44:22.842 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  9 10:44:22 compute-0 nova_compute[189493]: 2025-12-09 10:44:22.842 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  9 10:44:22 compute-0 nova_compute[189493]: 2025-12-09 10:44:22.871 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  9 10:44:22 compute-0 nova_compute[189493]: 2025-12-09 10:44:22.872 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  9 10:44:22 compute-0 nova_compute[189493]: 2025-12-09 10:44:22.872 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  9 10:44:22 compute-0 nova_compute[189493]: 2025-12-09 10:44:22.872 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec  9 10:44:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:44:23.286 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads available to execute them; the polling cycle can therefore be expected to take longer than usual. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  9 10:44:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:44:23.288 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
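[annotation] Those two DEBUG lines explain themselves: the [pollsters] source has more pollsters than worker threads, and everything is funnelled through a single-thread executor, so the cycle runs strictly sequentially. The scheduling, stripped to its skeleton:

    from concurrent.futures import ThreadPoolExecutor

    pollsters = ["network.incoming.bytes", "disk.device.capacity",
                 "network.outgoing.packets"]        # more tasks than workers

    def poll(name):
        # Discovery and sampling would happen here; skipped when no resources exist.
        return f"Skip pollster {name}, no resources found this cycle"

    with ThreadPoolExecutor(max_workers=1) as executor:  # [1] thread, as logged
        for line in executor.map(poll, pollsters):
            print(line)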
Dec  9 10:44:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:44:23.288 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b800>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a786b36e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 10:44:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:44:23.289 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f8a75e1b7d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 10:44:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:44:23.290 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e19820>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a786b36e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 10:44:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:44:23.291 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75eb8080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a786b36e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 10:44:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:44:23.292 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75eb8110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a786b36e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 10:44:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:44:23.292 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b1a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a786b36e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 10:44:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:44:23.292 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75eb81a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a786b36e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 10:44:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:44:23.292 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b2c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a786b36e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 10:44:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:44:23.293 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a786b36e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 10:44:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:44:23.293 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a786b36e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 10:44:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:44:23.293 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a78fa8380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a786b36e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 10:44:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:44:23.293 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a7702ebd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a786b36e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 10:44:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:44:23.293 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b3e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a786b36e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 10:44:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:44:23.294 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a786b36e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 10:44:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:44:23.294 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75eb8440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a786b36e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 10:44:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:44:23.294 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a78c21460>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a786b36e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 10:44:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:44:23.294 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b4a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a786b36e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 10:44:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:44:23.294 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1bce0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a786b36e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 10:44:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:44:23.295 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a786b36e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 10:44:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:44:23.295 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1bd10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a786b36e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 10:44:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:44:23.295 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a786b36e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 10:44:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:44:23.295 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1bd70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a786b36e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 10:44:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:44:23.296 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1bdd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a786b36e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 10:44:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:44:23.296 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1be30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a786b36e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 10:44:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:44:23.296 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1bf20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a786b36e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 10:44:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:44:23.297 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b7a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a786b36e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 10:44:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:44:23.297 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1bfb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a786b36e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 10:44:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:44:23.298 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  9 10:44:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:44:23.298 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f8a7854a570>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 10:44:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:44:23.298 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  9 10:44:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:44:23.298 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f8a75eb8050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 10:44:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:44:23.299 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  9 10:44:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:44:23.299 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f8a75eb80e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 10:44:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:44:23.299 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  9 10:44:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:44:23.300 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f8a75e1b260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 10:44:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:44:23.300 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  9 10:44:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:44:23.300 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f8a75eb8170>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 10:44:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:44:23.300 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  9 10:44:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:44:23.301 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f8a75e1b290>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 10:44:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:44:23.301 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  9 10:44:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:44:23.301 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f8a75e1b2f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 10:44:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:44:23.302 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  9 10:44:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:44:23.302 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f8a75e1b350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 10:44:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:44:23.302 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  9 10:44:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:44:23.302 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f8a7710f530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 10:44:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:44:23.303 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  9 10:44:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:44:23.303 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f8a78ed1430>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 10:44:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:44:23.303 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  9 10:44:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:44:23.303 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f8a75e1b3b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 10:44:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:44:23.303 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  9 10:44:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:44:23.304 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f8a75e1b410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 10:44:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:44:23.304 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  9 10:44:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:44:23.304 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f8a75eb8410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 10:44:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:44:23.304 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  9 10:44:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:44:23.305 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f8a75e1be90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 10:44:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:44:23.305 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  9 10:44:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:44:23.305 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f8a75e1b470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 10:44:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:44:23.305 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  9 10:44:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:44:23.306 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f8a75e1b830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 10:44:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:44:23.306 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  9 10:44:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:44:23.306 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f8a75e1b4d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 10:44:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:44:23.306 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  9 10:44:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:44:23.307 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f8a75e1bad0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 10:44:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:44:23.307 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  9 10:44:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:44:23.307 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f8a75e1b530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 10:44:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:44:23.307 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  9 10:44:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:44:23.308 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f8a75e1bd40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 10:44:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:44:23.308 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  9 10:44:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:44:23.308 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f8a75e1bda0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 10:44:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:44:23.309 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  9 10:44:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:44:23.309 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f8a75e1be00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 10:44:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:44:23.309 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  9 10:44:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:44:23.309 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f8a75e1bef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 10:44:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:44:23.310 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  9 10:44:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:44:23.310 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f8a75e1b770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 10:44:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:44:23.310 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  9 10:44:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:44:23.310 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f8a75e1bf80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 10:44:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:44:23.311 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
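Every "Executing discovery ... [local_instances]" / "Skip pollster ..." pair above is the same control flow: run the instance discovery, and if it returns nothing (no VMs on this host yet), skip the pollster for this cycle. The doubled space in "no  resources found" appears to be an empty string interpolated where a resource qualifier would normally go. A minimal sketch of that flow; discover_local_instances is a hypothetical stand-in for the agent's discovery method:

```python
# Sketch of the discovery-then-skip flow visible in the lines above.
import logging

LOG = logging.getLogger(__name__)

def poll_one(pollster_name, discover_local_instances):
    resources = discover_local_instances()
    if not resources:
        # With no libvirt instances on the host, every compute pollster
        # logs a "Skip pollster ..." line and returns nothing.
        LOG.debug("Skip pollster %s, no resources found this cycle",
                  pollster_name)
        return []
    # A real pollster would now build samples from each resource, e.g.
    # pollster.get_samples(manager, cache, resources).
    return resources

# e.g. poll_one("memory.usage", lambda: [])  -> skipped
```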
Dec  9 10:44:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:44:23.311 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 10:44:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:44:23.312 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 10:44:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:44:23.312 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 10:44:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:44:23.312 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 10:44:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:44:23.312 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 10:44:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:44:23.312 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 10:44:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:44:23.312 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 10:44:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:44:23.313 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 10:44:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:44:23.313 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 10:44:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:44:23.313 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 10:44:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:44:23.313 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 10:44:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:44:23.313 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 10:44:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:44:23.313 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 10:44:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:44:23.314 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 10:44:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:44:23.314 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 10:44:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:44:23.314 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 10:44:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:44:23.314 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 10:44:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:44:23.314 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 10:44:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:44:23.314 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 10:44:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:44:23.315 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 10:44:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:44:23.315 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 10:44:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:44:23.315 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 10:44:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:44:23.315 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 10:44:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:44:23.315 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 10:44:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:44:23.315 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 10:44:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:44:23.316 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
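The burst of "Finished processing pollster [...]" lines within a few milliseconds is what you would expect when each pollster's work was submitted to the ThreadPoolExecutor and completion is logged as the futures resolve. A minimal sketch of that pattern with a few of the meter names from this cycle:

```python
# Sketch: log completion as pollster futures resolve, producing a burst of
# "Finished processing pollster [...]" lines like the ones above.
from concurrent.futures import ThreadPoolExecutor, as_completed

names = ["network.incoming.bytes", "disk.device.capacity", "cpu"]

with ThreadPoolExecutor() as executor:
    futures = {executor.submit(lambda: None): n for n in names}
    for fut in as_completed(futures):
        fut.result()  # re-raise any exception from the pollster task
        print(f"Finished processing pollster [{futures[fut]}].")
```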
Dec  9 10:44:23 compute-0 nova_compute[189493]: 2025-12-09 10:44:23.367 189497 WARNING nova.virt.libvirt.driver [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec  9 10:44:23 compute-0 nova_compute[189493]: 2025-12-09 10:44:23.370 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5706MB free_disk=72.24060821533203GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec  9 10:44:23 compute-0 nova_compute[189493]: 2025-12-09 10:44:23.371 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  9 10:44:23 compute-0 nova_compute[189493]: 2025-12-09 10:44:23.372 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  9 10:44:23 compute-0 nova_compute[189493]: 2025-12-09 10:44:23.625 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec  9 10:44:23 compute-0 nova_compute[189493]: 2025-12-09 10:44:23.626 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec  9 10:44:23 compute-0 nova_compute[189493]: 2025-12-09 10:44:23.707 189497 DEBUG nova.scheduler.client.report [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Refreshing inventories for resource provider cdc1168d-33c9-4d2c-8f23-1b695a68afd0 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Dec  9 10:44:23 compute-0 nova_compute[189493]: 2025-12-09 10:44:23.792 189497 DEBUG nova.scheduler.client.report [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Updating ProviderTree inventory for provider cdc1168d-33c9-4d2c-8f23-1b695a68afd0 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Dec  9 10:44:23 compute-0 nova_compute[189493]: 2025-12-09 10:44:23.792 189497 DEBUG nova.compute.provider_tree [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Updating inventory in ProviderTree for provider cdc1168d-33c9-4d2c-8f23-1b695a68afd0 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec  9 10:44:23 compute-0 nova_compute[189493]: 2025-12-09 10:44:23.808 189497 DEBUG nova.scheduler.client.report [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Refreshing aggregate associations for resource provider cdc1168d-33c9-4d2c-8f23-1b695a68afd0, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Dec  9 10:44:23 compute-0 nova_compute[189493]: 2025-12-09 10:44:23.837 189497 DEBUG nova.scheduler.client.report [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Refreshing trait associations for resource provider cdc1168d-33c9-4d2c-8f23-1b695a68afd0, traits: COMPUTE_STORAGE_BUS_SATA,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_SSE,HW_CPU_X86_AMD_SVM,HW_CPU_X86_SSE4A,COMPUTE_STORAGE_BUS_FDC,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_SSE42,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_BMI,HW_CPU_X86_BMI2,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SHA,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_AESNI,HW_CPU_X86_CLMUL,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_ABM,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_SSSE3,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_SVM,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_DEVICE_TAGGING,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_F16C,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_AVX2,COMPUTE_NODE,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_SSE2,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_FMA3,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_ACCELERATORS,HW_CPU_X86_MMX,COMPUTE_SECURITY_TPM_2_0,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE41,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_GRAPHICS_MODEL_BOCHS _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Dec  9 10:44:23 compute-0 nova_compute[189493]: 2025-12-09 10:44:23.865 189497 DEBUG nova.compute.provider_tree [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Inventory has not changed in ProviderTree for provider: cdc1168d-33c9-4d2c-8f23-1b695a68afd0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec  9 10:44:23 compute-0 nova_compute[189493]: 2025-12-09 10:44:23.881 189497 DEBUG nova.scheduler.client.report [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Inventory has not changed for provider cdc1168d-33c9-4d2c-8f23-1b695a68afd0 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
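The inventory dict repeated in these lines is what placement uses to compute schedulable capacity: capacity = (total - reserved) * allocation_ratio per resource class. For this host that works out to 32 vCPUs (8 x 4.0), 7167 MB of RAM ((7679 - 512) x 1.0), and about 71 GB of disk (79 x 0.9). A quick check against the logged inventory:

```python
# Worked check of placement's effective capacity for the inventory above:
# capacity = (total - reserved) * allocation_ratio.
inventory = {
    'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
    'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
    'DISK_GB':   {'total': 79,   'reserved': 0,   'allocation_ratio': 0.9},
}

for rc, inv in inventory.items():
    capacity = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
    print(f"{rc}: {capacity:g}")
# VCPU: 32, MEMORY_MB: 7167, DISK_GB: 71.1
```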
Dec  9 10:44:23 compute-0 nova_compute[189493]: 2025-12-09 10:44:23.884 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec  9 10:44:23 compute-0 nova_compute[189493]: 2025-12-09 10:44:23.885 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.513s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
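The "Acquiring"/"acquired"/"released ... held 0.513s" lines bracketing the resource-tracker update are emitted by oslo.concurrency's lockutils around the critical section. A minimal sketch of that pattern using the real lockutils.lock context manager, with the lock name taken from the log:

```python
# Sketch: serialize the resource view update behind the same named lock
# seen in the lines above; lockutils logs acquisition, hold time, and
# release at DEBUG level.
from oslo_concurrency import lockutils

def update_available_resource():
    with lockutils.lock("compute_resources"):
        # critical section: recompute and publish the host resource view
        pass
```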
Dec  9 10:44:23 compute-0 podman[236848]: 2025-12-09 10:44:23.937471469 +0000 UTC m=+0.087838689 container health_status 8ad198c17f1da12dc50d5e17562d0139fb2a2f84db056ee9551dbf4f34c4cb9d (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30, release=1214.1726694543, io.k8s.display-name=Red Hat Universal Base Image 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., com.redhat.component=ubi9-container, vcs-type=git, name=ubi9, version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, maintainer=Red Hat, Inc., container_name=kepler, summary=Provides the latest release of Red Hat Universal Base Image 9., io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, distribution-scope=public, io.openshift.tags=base rhel9, config_id=edpm)
Dec  9 10:44:25 compute-0 podman[236868]: 2025-12-09 10:44:25.978609576 +0000 UTC m=+0.118049014 container health_status b432835229990b9e7cd237d75f8273b15e565fca524d4ea9a7c1f1bf3c773614 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible, container_name=ceilometer_agent_compute)
Dec  9 10:44:26 compute-0 podman[236886]: 2025-12-09 10:44:26.122470213 +0000 UTC m=+0.133999959 container health_status 8f562587c42532f877bd4ac5090cf2d81dd9415b6201e22f74972e6d6b9e9403 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
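The health_status=healthy entries above are podman's periodic healthchecks: each container's config_data declares a test (for example '/openstack/healthcheck compute') that podman runs inside the container on a schedule, tracking the failing streak. The same check can be triggered on demand; a minimal sketch using the podman CLI from the host, where exit status 0 means healthy:

```python
# Sketch: run a container's configured healthcheck by hand with
# `podman healthcheck run`, matching the scheduled runs logged above.
import subprocess

result = subprocess.run(
    ["podman", "healthcheck", "run", "ceilometer_agent_compute"],
    capture_output=True, text=True,
)
print("healthy" if result.returncode == 0
      else f"unhealthy: {result.stdout or result.stderr}")
```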
Dec  9 10:44:29 compute-0 podman[203687]: time="2025-12-09T10:44:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  9 10:44:29 compute-0 podman[203687]: @ - - [09/Dec/2025:10:44:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28290 "" "Go-http-client/1.1"
Dec  9 10:44:29 compute-0 podman[203687]: @ - - [09/Dec/2025:10:44:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4265 "" "Go-http-client/1.1"
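These two access-log lines are the prometheus-podman-exporter scraping podman's libpod REST API over its service socket (the exporter's config later in this log sets CONTAINER_HOST=unix:///run/podman/podman.sock). A minimal sketch reproducing the first call with only the standard library; the socket path is taken from that config:

```python
# Sketch: issue the same GET /libpod/containers/json request seen above
# over podman's UNIX socket, using http.client with a custom connect().
import http.client
import socket

class UnixHTTPConnection(http.client.HTTPConnection):
    def __init__(self, sock_path):
        super().__init__("localhost")  # host is ignored for UNIX sockets
        self.sock_path = sock_path

    def connect(self):
        s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        s.connect(self.sock_path)
        self.sock = s

conn = UnixHTTPConnection("/run/podman/podman.sock")
conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
resp = conn.getresponse()
print(resp.status, len(resp.read()), "bytes")
```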
Dec  9 10:44:31 compute-0 openstack_network_exporter[205823]: ERROR   10:44:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  9 10:44:31 compute-0 openstack_network_exporter[205823]: ERROR   10:44:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  9 10:44:31 compute-0 openstack_network_exporter[205823]: ERROR   10:44:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  9 10:44:31 compute-0 openstack_network_exporter[205823]: ERROR   10:44:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  9 10:44:31 compute-0 openstack_network_exporter[205823]: ERROR   10:44:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
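These exporter errors are expected on a compute node: ovn-appctl/ovs-appctl address a daemon through its control socket (typically /run/ovn/<daemon>.<pid>.ctl), and ovn-northd runs only with the central OVN services, not alongside the ovn-controller seen later in this log; the dpif-netdev calls likewise fail because no userspace (PMD) datapath exists here. A quick check for the missing-socket condition, a sketch assuming the exporter's /run/ovn mount:

```python
# Sketch: check for the condition behind "no control socket files found
# for ovn-northd" by looking for the daemon's .ctl file.
import glob

sockets = glob.glob("/run/ovn/ovn-northd.*.ctl")
if not sockets:
    print("no control socket files found for ovn-northd")
else:
    print("ovn-northd control socket(s):", sockets)
```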
Dec  9 10:44:31 compute-0 podman[236904]: 2025-12-09 10:44:31.974294088 +0000 UTC m=+0.118387173 container health_status 5da5cd4e36e0bba48fb617392bc8983ed1dbced7e4599ef74bb3327a2d50468d (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, io.openshift.expose-services=, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., config_id=edpm, vendor=Red Hat, Inc., version=9.6, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, name=ubi9-minimal, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public)
Dec  9 10:44:33 compute-0 podman[236924]: 2025-12-09 10:44:33.97202926 +0000 UTC m=+0.125383863 container health_status e0a077177b2f078df1f170a6e5c0e8e08d4365b999ec0c487047ed6ab628f3d6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Dec  9 10:44:35 compute-0 podman[236949]: 2025-12-09 10:44:35.939621278 +0000 UTC m=+0.084192010 container health_status d3a438131bb4ae6fd62d2e1493edbbbd51d1b8d6cbe1e9243f414a3aa421452b (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec  9 10:44:44 compute-0 podman[236974]: 2025-12-09 10:44:44.784088636 +0000 UTC m=+0.107695791 container health_status 0391d8911d61abd7376f1f93f329cadfe8d3add845c9e6f46fc2c3dfbcc4f02a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=multipathd, container_name=multipathd, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  9 10:44:47 compute-0 podman[236992]: 2025-12-09 10:44:47.971044785 +0000 UTC m=+0.116418160 container health_status 8508a94dacd5acdb5dbf860f4282331529be5c86ebd3e90b10e1dde8bc5013e9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec  9 10:44:51 compute-0 podman[237016]: 2025-12-09 10:44:51.948594462 +0000 UTC m=+0.098583762 container health_status ceb1c84a2b093143b9383b7e11364d7e851348d724743a0cd9ce4fd0c7070c92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm)
Dec  9 10:44:54 compute-0 podman[237036]: 2025-12-09 10:44:54.91613549 +0000 UTC m=+0.075282867 container health_status 8ad198c17f1da12dc50d5e17562d0139fb2a2f84db056ee9551dbf4f34c4cb9d (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, release=1214.1726694543, io.buildah.version=1.29.0, maintainer=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, version=9.4, architecture=x86_64, distribution-scope=public, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, com.redhat.component=ubi9-container, name=ubi9, container_name=kepler, config_id=edpm, managed_by=edpm_ansible, io.openshift.expose-services=, vendor=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2024-09-18T21:23:30)
Dec  9 10:44:56 compute-0 podman[237055]: 2025-12-09 10:44:56.924381194 +0000 UTC m=+0.073783146 container health_status 8f562587c42532f877bd4ac5090cf2d81dd9415b6201e22f74972e6d6b9e9403 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec  9 10:44:56 compute-0 podman[237056]: 2025-12-09 10:44:56.970079786 +0000 UTC m=+0.124786135 container health_status b432835229990b9e7cd237d75f8273b15e565fca524d4ea9a7c1f1bf3c773614 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true)
Dec  9 10:44:59 compute-0 podman[203687]: time="2025-12-09T10:44:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  9 10:44:59 compute-0 podman[203687]: @ - - [09/Dec/2025:10:44:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28290 "" "Go-http-client/1.1"
Dec  9 10:44:59 compute-0 podman[203687]: @ - - [09/Dec/2025:10:44:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4265 "" "Go-http-client/1.1"
Dec  9 10:45:00 compute-0 systemd-logind[806]: New session 29 of user zuul.
Dec  9 10:45:00 compute-0 systemd[1]: Started Session 29 of User zuul.
Dec  9 10:45:01 compute-0 openstack_network_exporter[205823]: ERROR   10:45:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  9 10:45:01 compute-0 openstack_network_exporter[205823]: ERROR   10:45:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  9 10:45:01 compute-0 openstack_network_exporter[205823]: ERROR   10:45:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  9 10:45:01 compute-0 openstack_network_exporter[205823]: ERROR   10:45:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  9 10:45:01 compute-0 openstack_network_exporter[205823]: ERROR   10:45:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  9 10:45:02 compute-0 podman[237243]: 2025-12-09 10:45:02.125294321 +0000 UTC m=+0.094355088 container health_status 5da5cd4e36e0bba48fb617392bc8983ed1dbced7e4599ef74bb3327a2d50468d (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public, io.buildah.version=1.33.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., release=1755695350, architecture=x86_64, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, version=9.6, io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec  9 10:45:02 compute-0 python3[237280]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  9 10:45:04 compute-0 podman[237484]: 2025-12-09 10:45:04.282720037 +0000 UTC m=+0.179464637 container health_status e0a077177b2f078df1f170a6e5c0e8e08d4365b999ec0c487047ed6ab628f3d6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Dec  9 10:45:04 compute-0 python3[237528]: ansible-ansible.legacy.command Invoked with _raw_params=tstamp=$(date -d '30 minute ago' "+%Y-%m-%d %H:%M:%S")#012journalctl -t "ceilometer_agent_compute" --no-pager -S "${tstamp}"#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  9 10:45:05 compute-0 python3[237691]: ansible-ansible.legacy.command Invoked with _raw_params=tstamp=$(date -d '30 minute ago' "+%Y-%m-%d %H:%M:%S")#012journalctl -t "nova_compute" --no-pager -S "${tstamp}"#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  9 10:45:06 compute-0 podman[237694]: 2025-12-09 10:45:06.929943418 +0000 UTC m=+0.083066953 container health_status d3a438131bb4ae6fd62d2e1493edbbbd51d1b8d6cbe1e9243f414a3aa421452b (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  9 10:45:08 compute-0 python3[237865]: ansible-ansible.builtin.stat Invoked with path=/etc/rsyslog.d/10-telemetry.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Dec  9 10:45:09 compute-0 python3[238020]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Dec  9 10:45:11 compute-0 python3[238245]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --format "{{.Names}} {{.Status}}" | grep ceilometer_agent_compute#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  9 10:45:12 compute-0 python3[238409]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --format "{{.Names}} {{.Status}}" | grep node_exporter#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  9 10:45:14 compute-0 podman[238449]: 2025-12-09 10:45:14.93447046 +0000 UTC m=+0.088004151 container health_status 0391d8911d61abd7376f1f93f329cadfe8d3add845c9e6f46fc2c3dfbcc4f02a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  9 10:45:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:45:16.973 106644 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  9 10:45:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:45:16.974 106644 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  9 10:45:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:45:16.974 106644 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  9 10:45:18 compute-0 podman[238467]: 2025-12-09 10:45:18.921859088 +0000 UTC m=+0.078282661 container health_status 8508a94dacd5acdb5dbf860f4282331529be5c86ebd3e90b10e1dde8bc5013e9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec  9 10:45:19 compute-0 nova_compute[189493]: 2025-12-09 10:45:19.880 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  9 10:45:19 compute-0 nova_compute[189493]: 2025-12-09 10:45:19.881 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  9 10:45:21 compute-0 nova_compute[189493]: 2025-12-09 10:45:21.841 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  9 10:45:21 compute-0 nova_compute[189493]: 2025-12-09 10:45:21.842 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  9 10:45:21 compute-0 nova_compute[189493]: 2025-12-09 10:45:21.842 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  9 10:45:21 compute-0 nova_compute[189493]: 2025-12-09 10:45:21.866 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  9 10:45:21 compute-0 nova_compute[189493]: 2025-12-09 10:45:21.867 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  9 10:45:21 compute-0 nova_compute[189493]: 2025-12-09 10:45:21.868 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  9 10:45:22 compute-0 nova_compute[189493]: 2025-12-09 10:45:22.841 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  9 10:45:22 compute-0 nova_compute[189493]: 2025-12-09 10:45:22.861 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  9 10:45:22 compute-0 nova_compute[189493]: 2025-12-09 10:45:22.861 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  9 10:45:22 compute-0 nova_compute[189493]: 2025-12-09 10:45:22.862 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  9 10:45:22 compute-0 nova_compute[189493]: 2025-12-09 10:45:22.862 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  9 10:45:22 compute-0 nova_compute[189493]: 2025-12-09 10:45:22.895 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  9 10:45:22 compute-0 nova_compute[189493]: 2025-12-09 10:45:22.896 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  9 10:45:22 compute-0 nova_compute[189493]: 2025-12-09 10:45:22.896 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  9 10:45:22 compute-0 nova_compute[189493]: 2025-12-09 10:45:22.896 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  9 10:45:22 compute-0 podman[238490]: 2025-12-09 10:45:22.949636258 +0000 UTC m=+0.100935660 container health_status ceb1c84a2b093143b9383b7e11364d7e851348d724743a0cd9ce4fd0c7070c92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec  9 10:45:23 compute-0 nova_compute[189493]: 2025-12-09 10:45:23.234 189497 WARNING nova.virt.libvirt.driver [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  9 10:45:23 compute-0 nova_compute[189493]: 2025-12-09 10:45:23.236 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5661MB free_disk=72.23666000366211GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  9 10:45:23 compute-0 nova_compute[189493]: 2025-12-09 10:45:23.236 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  9 10:45:23 compute-0 nova_compute[189493]: 2025-12-09 10:45:23.236 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  9 10:45:23 compute-0 nova_compute[189493]: 2025-12-09 10:45:23.316 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  9 10:45:23 compute-0 nova_compute[189493]: 2025-12-09 10:45:23.317 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  9 10:45:23 compute-0 nova_compute[189493]: 2025-12-09 10:45:23.346 189497 DEBUG nova.compute.provider_tree [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Inventory has not changed in ProviderTree for provider: cdc1168d-33c9-4d2c-8f23-1b695a68afd0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  9 10:45:23 compute-0 nova_compute[189493]: 2025-12-09 10:45:23.360 189497 DEBUG nova.scheduler.client.report [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Inventory has not changed for provider cdc1168d-33c9-4d2c-8f23-1b695a68afd0 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  9 10:45:23 compute-0 nova_compute[189493]: 2025-12-09 10:45:23.361 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  9 10:45:23 compute-0 nova_compute[189493]: 2025-12-09 10:45:23.361 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.125s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  9 10:45:24 compute-0 nova_compute[189493]: 2025-12-09 10:45:24.341 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  9 10:45:25 compute-0 podman[238509]: 2025-12-09 10:45:25.915277292 +0000 UTC m=+0.071039178 container health_status 8ad198c17f1da12dc50d5e17562d0139fb2a2f84db056ee9551dbf4f34c4cb9d (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, version=9.4, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.buildah.version=1.29.0, io.openshift.expose-services=, io.openshift.tags=base rhel9, managed_by=edpm_ansible, architecture=x86_64, build-date=2024-09-18T21:23:30, maintainer=Red Hat, Inc., release=1214.1726694543, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, com.redhat.component=ubi9-container, config_id=edpm, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, name=ubi9, io.k8s.display-name=Red Hat Universal Base Image 9)
Dec  9 10:45:27 compute-0 podman[238528]: 2025-12-09 10:45:27.932831335 +0000 UTC m=+0.088205297 container health_status 8f562587c42532f877bd4ac5090cf2d81dd9415b6201e22f74972e6d6b9e9403 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Dec  9 10:45:27 compute-0 podman[238529]: 2025-12-09 10:45:27.951873614 +0000 UTC m=+0.095720105 container health_status b432835229990b9e7cd237d75f8273b15e565fca524d4ea9a7c1f1bf3c773614 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.4, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Dec  9 10:45:29 compute-0 podman[203687]: time="2025-12-09T10:45:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  9 10:45:29 compute-0 podman[203687]: @ - - [09/Dec/2025:10:45:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28290 "" "Go-http-client/1.1"
Dec  9 10:45:29 compute-0 podman[203687]: @ - - [09/Dec/2025:10:45:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4267 "" "Go-http-client/1.1"
Dec  9 10:45:31 compute-0 openstack_network_exporter[205823]: ERROR   10:45:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  9 10:45:31 compute-0 openstack_network_exporter[205823]: ERROR   10:45:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  9 10:45:31 compute-0 openstack_network_exporter[205823]: ERROR   10:45:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  9 10:45:31 compute-0 openstack_network_exporter[205823]: ERROR   10:45:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  9 10:45:31 compute-0 openstack_network_exporter[205823]: ERROR   10:45:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  9 10:45:32 compute-0 podman[238562]: 2025-12-09 10:45:32.962523946 +0000 UTC m=+0.105940771 container health_status 5da5cd4e36e0bba48fb617392bc8983ed1dbced7e4599ef74bb3327a2d50468d (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, release=1755695350, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, architecture=x86_64, build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., io.openshift.tags=minimal rhel9, vcs-type=git)
Dec  9 10:45:34 compute-0 podman[238583]: 2025-12-09 10:45:34.962796157 +0000 UTC m=+0.124693151 container health_status e0a077177b2f078df1f170a6e5c0e8e08d4365b999ec0c487047ed6ab628f3d6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec  9 10:45:37 compute-0 podman[238609]: 2025-12-09 10:45:37.905315518 +0000 UTC m=+0.059439526 container health_status d3a438131bb4ae6fd62d2e1493edbbbd51d1b8d6cbe1e9243f414a3aa421452b (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec  9 10:45:45 compute-0 podman[238635]: 2025-12-09 10:45:45.912171064 +0000 UTC m=+0.073972781 container health_status 0391d8911d61abd7376f1f93f329cadfe8d3add845c9e6f46fc2c3dfbcc4f02a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=multipathd)
Dec  9 10:45:49 compute-0 podman[238656]: 2025-12-09 10:45:49.935629485 +0000 UTC m=+0.088716181 container health_status 8508a94dacd5acdb5dbf860f4282331529be5c86ebd3e90b10e1dde8bc5013e9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec  9 10:45:53 compute-0 podman[238679]: 2025-12-09 10:45:53.928389651 +0000 UTC m=+0.081016086 container health_status ceb1c84a2b093143b9383b7e11364d7e851348d724743a0cd9ce4fd0c7070c92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  9 10:45:56 compute-0 podman[238698]: 2025-12-09 10:45:56.942837443 +0000 UTC m=+0.095141559 container health_status 8ad198c17f1da12dc50d5e17562d0139fb2a2f84db056ee9551dbf4f34c4cb9d (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.component=ubi9-container, architecture=x86_64, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, build-date=2024-09-18T21:23:30, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.openshift.tags=base rhel9, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9, maintainer=Red Hat, Inc., io.buildah.version=1.29.0, io.openshift.expose-services=, vendor=Red Hat, Inc., release=1214.1726694543, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f)
Dec  9 10:45:58 compute-0 podman[238719]: 2025-12-09 10:45:58.970579896 +0000 UTC m=+0.120692585 container health_status b432835229990b9e7cd237d75f8273b15e565fca524d4ea9a7c1f1bf3c773614 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec  9 10:45:58 compute-0 podman[238718]: 2025-12-09 10:45:58.997813773 +0000 UTC m=+0.139788883 container health_status 8f562587c42532f877bd4ac5090cf2d81dd9415b6201e22f74972e6d6b9e9403 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec  9 10:45:59 compute-0 podman[203687]: time="2025-12-09T10:45:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  9 10:45:59 compute-0 podman[203687]: @ - - [09/Dec/2025:10:45:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28290 "" "Go-http-client/1.1"
Dec  9 10:45:59 compute-0 podman[203687]: @ - - [09/Dec/2025:10:45:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4264 "" "Go-http-client/1.1"
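The two GET lines show a client (here the podman exporter) talking to the libpod REST API over the podman socket; passing last=0 on the list call is what triggers the "overwriting `limit`" notice above. A stdlib-only sketch of the same query, assuming the socket path /run/podman/podman.sock that the podman_exporter container mounts:

    import http.client
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """Just enough HTTP-over-unix-socket to reach the libpod API."""

        def __init__(self, path: str):
            super().__init__("localhost")  # host header only; routing is the socket
            self._path = path

        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self._path)

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    resp = conn.getresponse()
    print(resp.status, len(resp.read()), "bytes")  # 200 plus a JSON array, as logged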
Dec  9 10:46:01 compute-0 openstack_network_exporter[205823]: ERROR   10:46:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  9 10:46:01 compute-0 openstack_network_exporter[205823]: ERROR   10:46:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  9 10:46:01 compute-0 openstack_network_exporter[205823]: ERROR   10:46:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  9 10:46:01 compute-0 openstack_network_exporter[205823]: ERROR   10:46:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  9 10:46:01 compute-0 openstack_network_exporter[205823]: ERROR   10:46:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
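These exporter errors are expected on a compute node: ovn-northd and a standalone ovsdb-server only run on OVN database hosts, and the dpif-netdev/* appctl calls only succeed with a userspace (DPDK) datapath. The exporter locates each daemon through its control socket, conventionally named <rundir>/<daemon>.<pid>.ctl; a sketch of that lookup under those naming assumptions:

    import glob
    import os
    import re

    # OVS/OVN daemons create one control socket per process, e.g.
    # /run/openvswitch/ovs-vswitchd.<pid>.ctl; ovs-appctl (and this exporter)
    # derive the target PID from the file name.
    RUNDIR = {"ovn-northd": "/run/ovn", "ovs-vswitchd": "/run/openvswitch"}

    def find_control_socket(daemon: str) -> tuple[str, int]:
        matches = glob.glob(os.path.join(RUNDIR[daemon], f"{daemon}.*.ctl"))
        if not matches:
            # Same condition the exporter reports above.
            raise FileNotFoundError(f"no control socket files found for {daemon}")
        pid = int(re.search(r"\.(\d+)\.ctl$", matches[0]).group(1))
        return matches[0], pid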
Dec  9 10:46:03 compute-0 podman[238756]: 2025-12-09 10:46:03.998638306 +0000 UTC m=+0.134382511 container health_status 5da5cd4e36e0bba48fb617392bc8983ed1dbced7e4599ef74bb3327a2d50468d (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, name=ubi9-minimal, version=9.6, build-date=2025-08-20T13:12:41, vendor=Red Hat, Inc., distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, managed_by=edpm_ansible, architecture=x86_64, container_name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, release=1755695350)
Dec  9 10:46:06 compute-0 podman[238777]: 2025-12-09 10:46:06.006876215 +0000 UTC m=+0.149133086 container health_status e0a077177b2f078df1f170a6e5c0e8e08d4365b999ec0c487047ed6ab628f3d6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec  9 10:46:08 compute-0 podman[238804]: 2025-12-09 10:46:08.947327651 +0000 UTC m=+0.094206990 container health_status d3a438131bb4ae6fd62d2e1493edbbbd51d1b8d6cbe1e9243f414a3aa421452b (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
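The node_exporter command line above disables most default collectors and keeps systemd unit metrics only for units matching the unit-include pattern; port 9100 is published on the host behind the TLS web config. A scrape sketch, with the CA bundle path and hostname as illustrative assumptions not taken from the log:

    import ssl
    import urllib.request

    # Scrape the exporter published on host port 9100; web.config.file enables
    # TLS, so verify against whatever CA signed the /etc/node_exporter/tls
    # material (the bundle path below is an assumption).
    ctx = ssl.create_default_context(cafile="/etc/pki/tls/certs/ca-bundle.trust.crt")
    with urllib.request.urlopen("https://compute-0:9100/metrics", context=ctx) as resp:
        for line in resp.read().decode().splitlines():
            if line.startswith("node_systemd_unit_state"):
                print(line)  # only (edpm_.*|ovs.*|openvswitch|virt.*|rsyslog) units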
Dec  9 10:46:12 compute-0 systemd[1]: session-29.scope: Deactivated successfully.
Dec  9 10:46:12 compute-0 systemd[1]: session-29.scope: Consumed 10.092s CPU time.
Dec  9 10:46:12 compute-0 systemd-logind[806]: Session 29 logged out. Waiting for processes to exit.
Dec  9 10:46:12 compute-0 systemd-logind[806]: Removed session 29.
Dec  9 10:46:16 compute-0 podman[238829]: 2025-12-09 10:46:16.943333212 +0000 UTC m=+0.104353877 container health_status 0391d8911d61abd7376f1f93f329cadfe8d3add845c9e6f46fc2c3dfbcc4f02a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251202)
Dec  9 10:46:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:46:16.974 106644 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  9 10:46:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:46:16.975 106644 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  9 10:46:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:46:16.975 106644 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
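The Acquiring/acquired/released triplet above is the standard debug trace from oslo.concurrency's lock wrapper (the inner function in lockutils.py cited on each line). A minimal reproduction of the pattern the agent uses:

    from oslo_concurrency import lockutils

    @lockutils.synchronized("_check_child_processes")
    def _check_child_processes():
        # With debug logging enabled, every call emits the same three lines as
        # above: Acquiring lock ... / Lock ... acquired :: waited Ns /
        # Lock ... "released" :: held Ns.
        pass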
Dec  9 10:46:18 compute-0 nova_compute[189493]: 2025-12-09 10:46:18.841 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  9 10:46:20 compute-0 nova_compute[189493]: 2025-12-09 10:46:20.837 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  9 10:46:20 compute-0 podman[238848]: 2025-12-09 10:46:20.908181518 +0000 UTC m=+0.065767441 container health_status 8508a94dacd5acdb5dbf860f4282331529be5c86ebd3e90b10e1dde8bc5013e9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec  9 10:46:22 compute-0 nova_compute[189493]: 2025-12-09 10:46:22.841 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  9 10:46:22 compute-0 nova_compute[189493]: 2025-12-09 10:46:22.841 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  9 10:46:22 compute-0 nova_compute[189493]: 2025-12-09 10:46:22.990 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  9 10:46:22 compute-0 nova_compute[189493]: 2025-12-09 10:46:22.991 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  9 10:46:22 compute-0 nova_compute[189493]: 2025-12-09 10:46:22.992 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  9 10:46:22 compute-0 nova_compute[189493]: 2025-12-09 10:46:22.993 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec  9 10:46:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:46:23.286 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  9 10:46:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:46:23.287 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec  9 10:46:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:46:23.287 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b800>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 10:46:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:46:23.288 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f8a75e1b7d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 10:46:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:46:23.289 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e19820>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 10:46:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:46:23.289 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75eb8080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 10:46:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:46:23.290 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75eb8110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 10:46:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:46:23.290 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b1a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 10:46:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:46:23.290 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75eb81a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 10:46:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:46:23.290 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b2c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 10:46:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:46:23.290 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 10:46:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:46:23.290 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 10:46:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:46:23.291 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a78fa8380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 10:46:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:46:23.291 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a7702ebd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 10:46:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:46:23.291 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b3e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 10:46:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:46:23.291 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{'network.incoming.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 10:46:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:46:23.291 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  9 10:46:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:46:23.291 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75eb8440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{'network.incoming.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 10:46:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:46:23.292 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a78c21460>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{'network.incoming.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 10:46:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:46:23.292 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b4a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{'network.incoming.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 10:46:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:46:23.292 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f8a7854a570>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 10:46:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:46:23.292 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1bce0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 10:46:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:46:23.293 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 10:46:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:46:23.293 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1bd10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 10:46:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:46:23.293 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 10:46:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:46:23.293 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1bd70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 10:46:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:46:23.293 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1bdd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 10:46:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:46:23.293 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1be30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 10:46:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:46:23.294 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1bf20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 10:46:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:46:23.293 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  9 10:46:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:46:23.294 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b7a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 10:46:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:46:23.294 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1bfb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 10:46:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:46:23.294 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f8a75eb8050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 10:46:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:46:23.295 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  9 10:46:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:46:23.296 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f8a75eb80e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 10:46:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:46:23.296 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  9 10:46:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:46:23.296 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f8a75e1b260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 10:46:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:46:23.296 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  9 10:46:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:46:23.296 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f8a75eb8170>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 10:46:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:46:23.297 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  9 10:46:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:46:23.297 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f8a75e1b290>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 10:46:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:46:23.297 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  9 10:46:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:46:23.297 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f8a75e1b2f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 10:46:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:46:23.298 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  9 10:46:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:46:23.298 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f8a75e1b350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 10:46:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:46:23.298 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  9 10:46:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:46:23.298 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f8a7710f530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 10:46:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:46:23.298 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  9 10:46:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:46:23.298 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f8a78ed1430>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 10:46:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:46:23.299 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  9 10:46:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:46:23.299 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f8a75e1b3b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 10:46:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:46:23.299 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  9 10:46:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:46:23.299 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f8a75e1b410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 10:46:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:46:23.299 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  9 10:46:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:46:23.300 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f8a75eb8410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 10:46:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:46:23.300 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  9 10:46:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:46:23.300 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f8a75e1be90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 10:46:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:46:23.300 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  9 10:46:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:46:23.301 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f8a75e1b470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 10:46:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:46:23.301 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  9 10:46:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:46:23.301 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f8a75e1b830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 10:46:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:46:23.301 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  9 10:46:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:46:23.301 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f8a75e1b4d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 10:46:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:46:23.302 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  9 10:46:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:46:23.302 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f8a75e1bad0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 10:46:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:46:23.302 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  9 10:46:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:46:23.302 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f8a75e1b530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 10:46:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:46:23.302 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  9 10:46:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:46:23.303 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f8a75e1bd40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 10:46:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:46:23.303 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  9 10:46:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:46:23.303 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f8a75e1bda0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 10:46:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:46:23.303 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  9 10:46:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:46:23.304 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f8a75e1be00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 10:46:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:46:23.304 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  9 10:46:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:46:23.304 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f8a75e1bef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 10:46:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:46:23.304 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  9 10:46:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:46:23.304 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f8a75e1b770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 10:46:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:46:23.305 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  9 10:46:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:46:23.305 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f8a75e1bf80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 10:46:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:46:23.305 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  9 10:46:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:46:23.306 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 10:46:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:46:23.306 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 10:46:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:46:23.306 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 10:46:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:46:23.306 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 10:46:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:46:23.306 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 10:46:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:46:23.306 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 10:46:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:46:23.306 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 10:46:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:46:23.306 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 10:46:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:46:23.306 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 10:46:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:46:23.307 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 10:46:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:46:23.307 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 10:46:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:46:23.307 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 10:46:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:46:23.307 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 10:46:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:46:23.307 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 10:46:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:46:23.307 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 10:46:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:46:23.307 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 10:46:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:46:23.308 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 10:46:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:46:23.308 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 10:46:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:46:23.308 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 10:46:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:46:23.308 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 10:46:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:46:23.308 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 10:46:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:46:23.308 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 10:46:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:46:23.308 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 10:46:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:46:23.308 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 10:46:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:46:23.308 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 10:46:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:46:23.309 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
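[editor's note] The run of "Finished processing pollster [...]" DEBUG lines above is one polling cycle of the ceilometer compute agent: the polling manager iterates every configured pollster and logs completion of each. A minimal, self-contained sketch of that loop (illustrative names, not ceilometer's actual internals):

```python
# Sketch of the polling cycle traced by the DEBUG lines above.
# POLLSTERS and get_samples() are illustrative stand-ins.
import logging

logging.basicConfig(level=logging.DEBUG)
LOG = logging.getLogger("ceilometer.polling.manager")

POLLSTERS = ["memory.usage", "power.state", "network.incoming.bytes"]

def get_samples(name):
    # Real pollsters query libvirt for every instance on the host
    # and return measurement samples; an empty list stands in here.
    return []

for name in POLLSTERS:
    samples = get_samples(name)
    # samples would be handed to the configured publisher here
    LOG.debug("Finished processing pollster [%s].", name)
```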
Dec  9 10:46:23 compute-0 nova_compute[189493]: 2025-12-09 10:46:23.405 189497 WARNING nova.virt.libvirt.driver [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  9 10:46:23 compute-0 nova_compute[189493]: 2025-12-09 10:46:23.406 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5712MB free_disk=72.23675155639648GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  9 10:46:23 compute-0 nova_compute[189493]: 2025-12-09 10:46:23.406 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  9 10:46:23 compute-0 nova_compute[189493]: 2025-12-09 10:46:23.406 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  9 10:46:23 compute-0 nova_compute[189493]: 2025-12-09 10:46:23.554 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  9 10:46:23 compute-0 nova_compute[189493]: 2025-12-09 10:46:23.555 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  9 10:46:23 compute-0 nova_compute[189493]: 2025-12-09 10:46:23.583 189497 DEBUG nova.compute.provider_tree [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Inventory has not changed in ProviderTree for provider: cdc1168d-33c9-4d2c-8f23-1b695a68afd0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  9 10:46:23 compute-0 nova_compute[189493]: 2025-12-09 10:46:23.676 189497 DEBUG nova.scheduler.client.report [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Inventory has not changed for provider cdc1168d-33c9-4d2c-8f23-1b695a68afd0 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
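[editor's note] The inventory record in the line above is what the placement service schedules against: per resource class, usable capacity is (total - reserved) * allocation_ratio. Worked out for the values reported here (a sketch of the formula, not placement's code):

```python
# Capacity placement derives from the inventory logged above:
#   capacity = (total - reserved) * allocation_ratio
inventory = {
    "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
    "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
    "DISK_GB":   {"total": 79,   "reserved": 0,   "allocation_ratio": 0.9},
}
for rc, inv in inventory.items():
    cap = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
    print(f"{rc}: {cap:g} schedulable")
# -> VCPU: 32, MEMORY_MB: 7167, DISK_GB: 71.1
```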
Dec  9 10:46:23 compute-0 nova_compute[189493]: 2025-12-09 10:46:23.679 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  9 10:46:23 compute-0 nova_compute[189493]: 2025-12-09 10:46:23.679 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.273s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
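[editor's note] The Acquiring/acquired/released triplets bracketing the resource-tracker update come from oslo.concurrency's synchronized decorator, which logs how long the caller waited for and held the named lock (here, 0.273s held). A sketch of the pattern; the decorator usage is oslo.concurrency's real API, the function body is illustrative:

```python
from oslo_concurrency import lockutils

@lockutils.synchronized("compute_resources")
def _update_available_resource():
    # Everything between the "acquired" and "released" DEBUG lines
    # above runs here; the "held 0.273s" figure is this body's duration.
    pass

_update_available_resource()
```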
Dec  9 10:46:24 compute-0 nova_compute[189493]: 2025-12-09 10:46:24.681 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  9 10:46:24 compute-0 nova_compute[189493]: 2025-12-09 10:46:24.681 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  9 10:46:24 compute-0 nova_compute[189493]: 2025-12-09 10:46:24.682 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  9 10:46:24 compute-0 nova_compute[189493]: 2025-12-09 10:46:24.696 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  9 10:46:24 compute-0 nova_compute[189493]: 2025-12-09 10:46:24.697 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  9 10:46:24 compute-0 nova_compute[189493]: 2025-12-09 10:46:24.697 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  9 10:46:24 compute-0 nova_compute[189493]: 2025-12-09 10:46:24.697 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  9 10:46:24 compute-0 nova_compute[189493]: 2025-12-09 10:46:24.840 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  9 10:46:24 compute-0 nova_compute[189493]: 2025-12-09 10:46:24.841 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
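[editor's note] The "Running periodic task ComputeManager._..." lines are oslo.service's periodic-task runner walking every decorated method on the compute manager; _reclaim_queued_deletes then returns immediately because reclaim_instance_interval is not set, which is the "skipping..." message above. A sketch of the decorator pattern (class and body are illustrative):

```python
from oslo_service import periodic_task

class Manager(periodic_task.PeriodicTasks):
    @periodic_task.periodic_task(spacing=60)
    def _reclaim_queued_deletes(self, context):
        # Mirrors the guard logged above: a non-positive interval
        # makes the task a no-op on every cycle.
        reclaim_instance_interval = 0
        if reclaim_instance_interval <= 0:
            return
```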
Dec  9 10:46:24 compute-0 podman[238872]: 2025-12-09 10:46:24.958736006 +0000 UTC m=+0.115915071 container health_status ceb1c84a2b093143b9383b7e11364d7e851348d724743a0cd9ce4fd0c7070c92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, container_name=ceilometer_agent_ipmi, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=edpm)
Dec  9 10:46:27 compute-0 podman[238890]: 2025-12-09 10:46:27.962251777 +0000 UTC m=+0.114013828 container health_status 8ad198c17f1da12dc50d5e17562d0139fb2a2f84db056ee9551dbf4f34c4cb9d (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, vcs-type=git, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, name=ubi9, com.redhat.component=ubi9-container, io.openshift.tags=base rhel9, release=1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, maintainer=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, distribution-scope=public, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, version=9.4, config_id=edpm, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, release-0.7.12=)
Dec  9 10:46:29 compute-0 podman[203687]: time="2025-12-09T10:46:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  9 10:46:29 compute-0 podman[203687]: @ - - [09/Dec/2025:10:46:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28290 "" "Go-http-client/1.1"
Dec  9 10:46:29 compute-0 podman[203687]: @ - - [09/Dec/2025:10:46:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4268 "" "Go-http-client/1.1"
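[editor's note] The two "@ - -" access-log lines are the podman_exporter scraping podman's libpod REST API over the unix socket (the "@" is the peer address shown for socket clients). The same endpoint can be queried directly; a sketch, assuming the default socket path /run/podman/podman.sock:

```python
import http.client
import json
import socket

class UnixHTTPConnection(http.client.HTTPConnection):
    """http.client connection that dials a unix socket instead of TCP."""

    def __init__(self, path):
        super().__init__("localhost")
        self._path = path

    def connect(self):
        self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        self.sock.connect(self._path)

# Same query as the first access-log line above.
conn = UnixHTTPConnection("/run/podman/podman.sock")
conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
for c in json.loads(conn.getresponse().read()):
    print(c["Names"], c["State"])
```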
Dec  9 10:46:29 compute-0 podman[238910]: 2025-12-09 10:46:29.948979051 +0000 UTC m=+0.097268476 container health_status b432835229990b9e7cd237d75f8273b15e565fca524d4ea9a7c1f1bf3c773614 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2)
Dec  9 10:46:29 compute-0 podman[238909]: 2025-12-09 10:46:29.952599084 +0000 UTC m=+0.103286976 container health_status 8f562587c42532f877bd4ac5090cf2d81dd9415b6201e22f74972e6d6b9e9403 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Dec  9 10:46:31 compute-0 openstack_network_exporter[205823]: ERROR   10:46:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  9 10:46:31 compute-0 openstack_network_exporter[205823]: ERROR   10:46:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  9 10:46:31 compute-0 openstack_network_exporter[205823]: ERROR   10:46:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  9 10:46:31 compute-0 openstack_network_exporter[205823]: ERROR   10:46:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  9 10:46:31 compute-0 openstack_network_exporter[205823]: ERROR   10:46:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
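[editor's note] The exporter errors above recur on every scrape: openstack_network_exporter probes for ovn-northd and ovsdb-server control sockets, but a compute node runs only ovn-controller and ovs-vswitchd, so those sockets never exist and the errors are expected noise here. A quick check of what is actually present (socket paths are the usual defaults, an assumption):

```python
import glob

# OVS/OVN daemons create *.ctl control sockets while running;
# absent daemons (ovn-northd on a compute node) leave none.
for pattern in ("/run/ovn/ovn-northd.*.ctl",
                "/run/ovn/ovn-controller.*.ctl",
                "/run/openvswitch/ovs-vswitchd.*.ctl"):
    hits = glob.glob(pattern)
    print(pattern, "->", hits or "not found")
```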
Dec  9 10:46:34 compute-0 podman[238949]: 2025-12-09 10:46:34.956874192 +0000 UTC m=+0.104229312 container health_status 5da5cd4e36e0bba48fb617392bc8983ed1dbced7e4599ef74bb3327a2d50468d (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, architecture=x86_64, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1755695350, io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., container_name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal)
Dec  9 10:46:36 compute-0 podman[238969]: 2025-12-09 10:46:36.99132506 +0000 UTC m=+0.139218987 container health_status e0a077177b2f078df1f170a6e5c0e8e08d4365b999ec0c487047ed6ab628f3d6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Dec  9 10:46:39 compute-0 podman[238994]: 2025-12-09 10:46:39.950612056 +0000 UTC m=+0.102768321 container health_status d3a438131bb4ae6fd62d2e1493edbbbd51d1b8d6cbe1e9243f414a3aa421452b (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
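[editor's note] node_exporter above is started with --collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service, so the systemd collector exports only matching units. node_exporter anchors include/exclude patterns (a full match); the same check in Python:

```python
import re

# Equivalent of node_exporter's anchored unit-include filter above.
unit_include = re.compile(r"(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service")

for unit in ("openvswitch.service", "virtqemud.service",
             "edpm_nova_compute.service", "sshd.service"):
    print(unit, "->", "exported" if unit_include.fullmatch(unit) else "skipped")
```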
Dec  9 10:46:47 compute-0 podman[239020]: 2025-12-09 10:46:47.958209331 +0000 UTC m=+0.099307265 container health_status 0391d8911d61abd7376f1f93f329cadfe8d3add845c9e6f46fc2c3dfbcc4f02a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_id=multipathd, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible)
Dec  9 10:46:51 compute-0 podman[239041]: 2025-12-09 10:46:51.929323734 +0000 UTC m=+0.087590625 container health_status 8508a94dacd5acdb5dbf860f4282331529be5c86ebd3e90b10e1dde8bc5013e9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec  9 10:46:55 compute-0 podman[239067]: 2025-12-09 10:46:55.941941664 +0000 UTC m=+0.096495095 container health_status ceb1c84a2b093143b9383b7e11364d7e851348d724743a0cd9ce4fd0c7070c92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  9 10:46:58 compute-0 podman[239087]: 2025-12-09 10:46:58.985176992 +0000 UTC m=+0.131771797 container health_status 8ad198c17f1da12dc50d5e17562d0139fb2a2f84db056ee9551dbf4f34c4cb9d (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release=1214.1726694543, release-0.7.12=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.tags=base rhel9, name=ubi9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.4, managed_by=edpm_ansible, io.buildah.version=1.29.0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, vendor=Red Hat, Inc., config_id=edpm, com.redhat.component=ubi9-container, maintainer=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, io.openshift.expose-services=, summary=Provides the latest release of Red Hat Universal Base Image 9., container_name=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, build-date=2024-09-18T21:23:30, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec  9 10:46:59 compute-0 podman[203687]: time="2025-12-09T10:46:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  9 10:46:59 compute-0 podman[203687]: @ - - [09/Dec/2025:10:46:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28290 "" "Go-http-client/1.1"
Dec  9 10:46:59 compute-0 podman[203687]: @ - - [09/Dec/2025:10:46:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4272 "" "Go-http-client/1.1"
Dec  9 10:47:00 compute-0 podman[239106]: 2025-12-09 10:47:00.952457739 +0000 UTC m=+0.087627825 container health_status b432835229990b9e7cd237d75f8273b15e565fca524d4ea9a7c1f1bf3c773614 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42)
Dec  9 10:47:00 compute-0 podman[239105]: 2025-12-09 10:47:00.971170476 +0000 UTC m=+0.105075366 container health_status 8f562587c42532f877bd4ac5090cf2d81dd9415b6201e22f74972e6d6b9e9403 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Dec  9 10:47:01 compute-0 openstack_network_exporter[205823]: ERROR   10:47:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  9 10:47:01 compute-0 openstack_network_exporter[205823]: ERROR   10:47:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  9 10:47:01 compute-0 openstack_network_exporter[205823]: ERROR   10:47:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  9 10:47:01 compute-0 openstack_network_exporter[205823]: ERROR   10:47:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  9 10:47:01 compute-0 openstack_network_exporter[205823]: ERROR   10:47:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  9 10:47:05 compute-0 podman[239143]: 2025-12-09 10:47:05.974344485 +0000 UTC m=+0.111226299 container health_status 5da5cd4e36e0bba48fb617392bc8983ed1dbced7e4599ef74bb3327a2d50468d (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, maintainer=Red Hat, Inc., managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, release=1755695350, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, io.openshift.expose-services=)
Dec  9 10:47:08 compute-0 podman[239164]: 2025-12-09 10:47:08.034489594 +0000 UTC m=+0.174129208 container health_status e0a077177b2f078df1f170a6e5c0e8e08d4365b999ec0c487047ed6ab628f3d6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible)
Dec  9 10:47:10 compute-0 podman[239192]: 2025-12-09 10:47:10.953101497 +0000 UTC m=+0.110453997 container health_status d3a438131bb4ae6fd62d2e1493edbbbd51d1b8d6cbe1e9243f414a3aa421452b (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  9 10:47:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:47:16.975 106644 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  9 10:47:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:47:16.976 106644 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  9 10:47:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:47:16.976 106644 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  9 10:47:18 compute-0 podman[239216]: 2025-12-09 10:47:18.941085821 +0000 UTC m=+0.097924615 container health_status 0391d8911d61abd7376f1f93f329cadfe8d3add845c9e6f46fc2c3dfbcc4f02a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec  9 10:47:19 compute-0 nova_compute[189493]: 2025-12-09 10:47:19.842 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  9 10:47:21 compute-0 nova_compute[189493]: 2025-12-09 10:47:21.839 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  9 10:47:22 compute-0 nova_compute[189493]: 2025-12-09 10:47:22.836 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  9 10:47:22 compute-0 podman[239236]: 2025-12-09 10:47:22.926020462 +0000 UTC m=+0.076340358 container health_status 8508a94dacd5acdb5dbf860f4282331529be5c86ebd3e90b10e1dde8bc5013e9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  9 10:47:22 compute-0 nova_compute[189493]: 2025-12-09 10:47:22.927 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  9 10:47:23 compute-0 nova_compute[189493]: 2025-12-09 10:47:23.841 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  9 10:47:23 compute-0 nova_compute[189493]: 2025-12-09 10:47:23.842 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  9 10:47:23 compute-0 nova_compute[189493]: 2025-12-09 10:47:23.880 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  9 10:47:23 compute-0 nova_compute[189493]: 2025-12-09 10:47:23.882 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  9 10:47:23 compute-0 nova_compute[189493]: 2025-12-09 10:47:23.882 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  9 10:47:23 compute-0 nova_compute[189493]: 2025-12-09 10:47:23.883 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  9 10:47:24 compute-0 nova_compute[189493]: 2025-12-09 10:47:24.334 189497 WARNING nova.virt.libvirt.driver [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec  9 10:47:24 compute-0 nova_compute[189493]: 2025-12-09 10:47:24.335 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5709MB free_disk=72.23674774169922GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec  9 10:47:24 compute-0 nova_compute[189493]: 2025-12-09 10:47:24.336 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  9 10:47:24 compute-0 nova_compute[189493]: 2025-12-09 10:47:24.336 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  9 10:47:24 compute-0 nova_compute[189493]: 2025-12-09 10:47:24.425 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec  9 10:47:24 compute-0 nova_compute[189493]: 2025-12-09 10:47:24.426 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec  9 10:47:24 compute-0 nova_compute[189493]: 2025-12-09 10:47:24.450 189497 DEBUG nova.compute.provider_tree [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Inventory has not changed in ProviderTree for provider: cdc1168d-33c9-4d2c-8f23-1b695a68afd0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec  9 10:47:24 compute-0 nova_compute[189493]: 2025-12-09 10:47:24.471 189497 DEBUG nova.scheduler.client.report [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Inventory has not changed for provider cdc1168d-33c9-4d2c-8f23-1b695a68afd0 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
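The inventory dict in the line above is exactly what the Placement service uses to size this provider: for each resource class the schedulable capacity is (total - reserved) * allocation_ratio. A quick illustration in plain Python (no OpenStack imports), using the values from the log:

```python
# Capacity math for the inventory reported above.
inventory = {
    'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
    'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
    'DISK_GB':   {'total': 79,   'reserved': 0,   'allocation_ratio': 0.9},
}
for rc, inv in inventory.items():
    cap = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
    print(f"{rc}: schedulable capacity = {cap:g}")
# VCPU: 32  MEMORY_MB: 7167  DISK_GB: 71.1 -- which is how 8 physical vCPUs
# can back up to 32 allocated vCPUs at allocation_ratio=4.0.
```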
Dec  9 10:47:24 compute-0 nova_compute[189493]: 2025-12-09 10:47:24.472 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec  9 10:47:24 compute-0 nova_compute[189493]: 2025-12-09 10:47:24.472 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.136s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
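The acquiring/acquired/released triplet around _update_available_resource is emitted by oslo.concurrency's lock wrapper. A minimal sketch of the pattern, assuming oslo.concurrency is installed; only the lock name is taken from the log, the class body is illustrative:

```python
from oslo_concurrency import lockutils

class ResourceTracker:
    @lockutils.synchronized('compute_resources')
    def _update_available_resource(self, context):
        # While this runs, no other thread in the process may enter a
        # function guarded by the same lock name; the wrapper itself logs
        # the "waited 0.000s" / "held 0.136s" lines seen above.
        pass
```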
Dec  9 10:47:25 compute-0 nova_compute[189493]: 2025-12-09 10:47:25.472 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  9 10:47:25 compute-0 nova_compute[189493]: 2025-12-09 10:47:25.472 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  9 10:47:25 compute-0 nova_compute[189493]: 2025-12-09 10:47:25.473 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec  9 10:47:25 compute-0 nova_compute[189493]: 2025-12-09 10:47:25.843 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  9 10:47:25 compute-0 nova_compute[189493]: 2025-12-09 10:47:25.843 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec  9 10:47:25 compute-0 nova_compute[189493]: 2025-12-09 10:47:25.844 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec  9 10:47:25 compute-0 nova_compute[189493]: 2025-12-09 10:47:25.863 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec  9 10:47:25 compute-0 nova_compute[189493]: 2025-12-09 10:47:25.864 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
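All of the "Running periodic task ComputeManager._*" lines come from oslo.service's periodic task machinery. A minimal sketch of how such tasks are declared, assuming oslo.service is installed (the method body and spacing value are illustrative):

```python
from oslo_service import periodic_task

class ComputeManagerLike(periodic_task.PeriodicTasks):
    @periodic_task.periodic_task(spacing=60)
    def _heal_instance_info_cache(self, context):
        # run_periodic_tasks() dispatches this method on its timer and
        # emits the "Running periodic task ..." line for each invocation.
        pass
```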
Dec  9 10:47:26 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:47:26.415 106644 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=2, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '56:ee:a7', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '3e:d4:ad:27:cb:0f'}, ipsec=False) old=SB_Global(nb_cfg=1) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec  9 10:47:26 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:47:26.417 106644 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec  9 10:47:26 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:47:26.418 106644 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=9ec27861-bbe8-48fb-b30f-25b967e1609e, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '2'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
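The DbSetCommand above is the metadata agent acknowledging SB_Global.nb_cfg=2 by stamping its own Chassis_Private row. A hedged sketch of the same write through ovsdbapp's generic db_set command; sb_idl stands in for an already-connected southbound API object, and only the table, record UUID and column values are taken from the log:

```python
def ack_sb_cfg(sb_idl,
               chassis_uuid='9ec27861-bbe8-48fb-b30f-25b967e1609e',
               nb_cfg=2):
    """Record that this chassis has processed SB_Global.nb_cfg == nb_cfg."""
    # One transaction: merge the key into the row's external_ids map.
    sb_idl.db_set(
        'Chassis_Private', chassis_uuid,
        ('external_ids', {'neutron:ovn-metadata-sb-cfg': str(nb_cfg)}),
    ).execute(check_error=True)
```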
Dec  9 10:47:26 compute-0 podman[239259]: 2025-12-09 10:47:26.992397905 +0000 UTC m=+0.141363088 container health_status ceb1c84a2b093143b9383b7e11364d7e851348d724743a0cd9ce4fd0c7070c92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_id=edpm, managed_by=edpm_ansible, tcib_managed=true, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Dec  9 10:47:29 compute-0 podman[203687]: time="2025-12-09T10:47:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  9 10:47:29 compute-0 podman[203687]: @ - - [09/Dec/2025:10:47:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28290 "" "Go-http-client/1.1"
Dec  9 10:47:29 compute-0 podman[203687]: @ - - [09/Dec/2025:10:47:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4264 "" "Go-http-client/1.1"
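The two GET requests above are the podman exporter polling the libpod REST API over the podman service socket. A self-contained stdlib sketch of the same call; the socket path is an assumption taken from the CONTAINER_HOST setting in the podman_exporter config further down in this log:

```python
import http.client
import json
import socket

class UnixHTTPConnection(http.client.HTTPConnection):
    """HTTP over a unix socket, enough to talk to the podman service."""
    def __init__(self, path):
        super().__init__('localhost')
        self.unix_path = path

    def connect(self):
        self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        self.sock.connect(self.unix_path)

conn = UnixHTTPConnection('/run/podman/podman.sock')
conn.request('GET', '/v4.9.3/libpod/containers/json?all=true&external=false')
containers = json.loads(conn.getresponse().read())
print(len(containers), 'containers')
```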
Dec  9 10:47:29 compute-0 podman[239278]: 2025-12-09 10:47:29.950664563 +0000 UTC m=+0.104371166 container health_status 8ad198c17f1da12dc50d5e17562d0139fb2a2f84db056ee9551dbf4f34c4cb9d (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., managed_by=edpm_ansible, release-0.7.12=, vcs-type=git, architecture=x86_64, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., distribution-scope=public, name=ubi9, config_id=edpm, container_name=kepler, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.buildah.version=1.29.0, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2024-09-18T21:23:30, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9)
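The kepler container above publishes Prometheus metrics on host port 8888 (the 'ports': ['8888:8888'] mapping in its config_data). A quick scrape-and-parse sketch, assuming prometheus_client is installed and the standard /metrics path:

```python
import urllib.request
from prometheus_client.parser import text_string_to_metric_families

raw = urllib.request.urlopen('http://localhost:8888/metrics').read().decode()
for family in text_string_to_metric_families(raw):
    # Keep only kepler's own metric families, skip Go runtime internals.
    if family.name.startswith('kepler_'):
        print(family.name, len(family.samples))
```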
Dec  9 10:47:31 compute-0 openstack_network_exporter[205823]: ERROR   10:47:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  9 10:47:31 compute-0 openstack_network_exporter[205823]: ERROR   10:47:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  9 10:47:31 compute-0 openstack_network_exporter[205823]: ERROR   10:47:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  9 10:47:31 compute-0 openstack_network_exporter[205823]: ERROR   10:47:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  9 10:47:31 compute-0 openstack_network_exporter[205823]: ERROR   10:47:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
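These exporter errors recur every 30 seconds and are expected on a compute node: ovn-northd and a standalone ovsdb-server only leave *.ctl control sockets on control-plane hosts, so the appctl probes here find nothing to talk to. A small diagnostic sketch; the paths mirror the volume mounts in the openstack_network_exporter config later in this log:

```python
import glob

# The exporter builds its appctl calls from *.ctl control sockets; on this
# host only ovs-vswitchd/ovn-controller sockets exist, hence the failures.
for pattern in ('/run/openvswitch/*.ctl', '/run/ovn/*.ctl'):
    sockets = glob.glob(pattern)
    print(pattern, '->', sockets or 'no control socket files found')
```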
Dec  9 10:47:31 compute-0 podman[239297]: 2025-12-09 10:47:31.97985011 +0000 UTC m=+0.127733223 container health_status 8f562587c42532f877bd4ac5090cf2d81dd9415b6201e22f74972e6d6b9e9403 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec  9 10:47:31 compute-0 podman[239298]: 2025-12-09 10:47:31.981381834 +0000 UTC m=+0.120924663 container health_status b432835229990b9e7cd237d75f8273b15e565fca524d4ea9a7c1f1bf3c773614 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image)
Dec  9 10:47:36 compute-0 podman[239333]: 2025-12-09 10:47:36.960220679 +0000 UTC m=+0.116744695 container health_status 5da5cd4e36e0bba48fb617392bc8983ed1dbced7e4599ef74bb3327a2d50468d (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, release=1755695350, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., managed_by=edpm_ansible, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., config_id=edpm, build-date=2025-08-20T13:12:41, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, io.openshift.tags=minimal rhel9, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, name=ubi9-minimal)
Dec  9 10:47:38 compute-0 podman[239356]: 2025-12-09 10:47:38.461278753 +0000 UTC m=+0.127406626 container health_status e0a077177b2f078df1f170a6e5c0e8e08d4365b999ec0c487047ed6ab628f3d6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Dec  9 10:47:41 compute-0 podman[239380]: 2025-12-09 10:47:41.99189696 +0000 UTC m=+0.130901173 container health_status d3a438131bb4ae6fd62d2e1493edbbbd51d1b8d6cbe1e9243f414a3aa421452b (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
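The node_exporter systemd collector above is scoped with --collector.systemd.unit-include: only unit names matching that regex are exported. A quick check of the pattern in plain Python (the unit names below are illustrative, not taken from this host):

```python
import re

# Pattern copied from the node_exporter command line above.
pattern = re.compile(r'(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service')
for unit in ('edpm_nova_compute.service', 'ovs-vswitchd.service',
             'virtqemud.service', 'sshd.service'):
    print(unit, '->', bool(pattern.fullmatch(unit)))
# sshd.service is filtered out; only the EDPM/OVS/virt/rsyslog units are kept.
```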
Dec  9 10:47:49 compute-0 podman[239404]: 2025-12-09 10:47:49.985187784 +0000 UTC m=+0.126042327 container health_status 0391d8911d61abd7376f1f93f329cadfe8d3add845c9e6f46fc2c3dfbcc4f02a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.license=GPLv2, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202)
Dec  9 10:47:53 compute-0 podman[239424]: 2025-12-09 10:47:53.991738234 +0000 UTC m=+0.134340390 container health_status 8508a94dacd5acdb5dbf860f4282331529be5c86ebd3e90b10e1dde8bc5013e9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  9 10:47:57 compute-0 podman[239447]: 2025-12-09 10:47:57.9665182 +0000 UTC m=+0.124227997 container health_status ceb1c84a2b093143b9383b7e11364d7e851348d724743a0cd9ce4fd0c7070c92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, config_id=edpm, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Dec  9 10:47:59 compute-0 podman[203687]: time="2025-12-09T10:47:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  9 10:47:59 compute-0 podman[203687]: @ - - [09/Dec/2025:10:47:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28290 "" "Go-http-client/1.1"
Dec  9 10:47:59 compute-0 podman[203687]: @ - - [09/Dec/2025:10:47:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4275 "" "Go-http-client/1.1"
Dec  9 10:48:00 compute-0 podman[239467]: 2025-12-09 10:48:00.910064353 +0000 UTC m=+0.068047875 container health_status 8ad198c17f1da12dc50d5e17562d0139fb2a2f84db056ee9551dbf4f34c4cb9d (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, release-0.7.12=, vcs-type=git, config_id=edpm, version=9.4, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, release=1214.1726694543, distribution-scope=public, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.component=ubi9-container, name=ubi9, architecture=x86_64, io.openshift.expose-services=, managed_by=edpm_ansible, build-date=2024-09-18T21:23:30, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.tags=base rhel9)
Dec  9 10:48:01 compute-0 openstack_network_exporter[205823]: ERROR   10:48:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  9 10:48:01 compute-0 openstack_network_exporter[205823]: ERROR   10:48:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  9 10:48:01 compute-0 openstack_network_exporter[205823]: ERROR   10:48:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  9 10:48:01 compute-0 openstack_network_exporter[205823]: ERROR   10:48:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  9 10:48:01 compute-0 openstack_network_exporter[205823]: ERROR   10:48:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  9 10:48:02 compute-0 podman[239487]: 2025-12-09 10:48:02.974430012 +0000 UTC m=+0.114413679 container health_status 8f562587c42532f877bd4ac5090cf2d81dd9415b6201e22f74972e6d6b9e9403 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  9 10:48:02 compute-0 podman[239488]: 2025-12-09 10:48:02.993367205 +0000 UTC m=+0.125976875 container health_status b432835229990b9e7cd237d75f8273b15e565fca524d4ea9a7c1f1bf3c773614 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, config_id=edpm, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec  9 10:48:07 compute-0 podman[239527]: 2025-12-09 10:48:07.409091665 +0000 UTC m=+0.098661327 container health_status 5da5cd4e36e0bba48fb617392bc8983ed1dbced7e4599ef74bb3327a2d50468d (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, config_id=edpm, io.openshift.tags=minimal rhel9, io.openshift.expose-services=, name=ubi9-minimal, release=1755695350, architecture=x86_64, container_name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec  9 10:48:09 compute-0 podman[239547]: 2025-12-09 10:48:09.016966653 +0000 UTC m=+0.165657981 container health_status e0a077177b2f078df1f170a6e5c0e8e08d4365b999ec0c487047ed6ab628f3d6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3)
Dec  9 10:48:12 compute-0 podman[239572]: 2025-12-09 10:48:12.996400125 +0000 UTC m=+0.139659992 container health_status d3a438131bb4ae6fd62d2e1493edbbbd51d1b8d6cbe1e9243f414a3aa421452b (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  9 10:48:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:48:16.977 106644 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  9 10:48:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:48:16.977 106644 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  9 10:48:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:48:16.977 106644 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  9 10:48:20 compute-0 podman[239595]: 2025-12-09 10:48:20.902248844 +0000 UTC m=+0.060730809 container health_status 0391d8911d61abd7376f1f93f329cadfe8d3add845c9e6f46fc2c3dfbcc4f02a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202)
Dec  9 10:48:21 compute-0 nova_compute[189493]: 2025-12-09 10:48:21.841 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  9 10:48:22 compute-0 nova_compute[189493]: 2025-12-09 10:48:22.837 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  9 10:48:22 compute-0 nova_compute[189493]: 2025-12-09 10:48:22.841 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  9 10:48:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:48:23.287 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads available to execute them, so the polling cycle can be expected to take longer than usual. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  9 10:48:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:48:23.288 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec  9 10:48:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:48:23.288 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b800>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 10:48:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:48:23.288 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f8a75e1b7d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 10:48:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:48:23.289 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e19820>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 10:48:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:48:23.289 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75eb8080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 10:48:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:48:23.289 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75eb8110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 10:48:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:48:23.290 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b1a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 10:48:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:48:23.290 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75eb81a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 10:48:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:48:23.290 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b2c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 10:48:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:48:23.290 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 10:48:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:48:23.290 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 10:48:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:48:23.290 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a78fa8380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{'network.incoming.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 10:48:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:48:23.290 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  9 10:48:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:48:23.290 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a7702ebd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{'network.incoming.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 10:48:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:48:23.291 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b3e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{'network.incoming.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 10:48:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:48:23.291 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{'network.incoming.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 10:48:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:48:23.291 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75eb8440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{'network.incoming.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 10:48:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:48:23.291 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a78c21460>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{'network.incoming.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 10:48:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:48:23.291 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b4a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{'network.incoming.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 10:48:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:48:23.291 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1bce0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{'network.incoming.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 10:48:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:48:23.291 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f8a7854a570>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 10:48:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:48:23.292 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  9 10:48:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:48:23.292 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f8a75eb8050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 10:48:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:48:23.291 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'disk.device.capacity': [], 'network.outgoing.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 10:48:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:48:23.292 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1bd10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'disk.device.capacity': [], 'network.outgoing.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 10:48:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:48:23.293 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'disk.device.capacity': [], 'network.outgoing.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 10:48:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:48:23.293 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1bd70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'disk.device.capacity': [], 'network.outgoing.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 10:48:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:48:23.293 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1bdd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'disk.device.capacity': [], 'network.outgoing.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 10:48:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:48:23.293 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1be30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'disk.device.capacity': [], 'network.outgoing.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 10:48:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:48:23.293 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1bf20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'disk.device.capacity': [], 'network.outgoing.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 10:48:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:48:23.292 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  9 10:48:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:48:23.294 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f8a75eb80e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 10:48:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:48:23.294 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  9 10:48:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:48:23.293 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b7a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'disk.device.capacity': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 10:48:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:48:23.294 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1bfb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'disk.device.capacity': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 10:48:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:48:23.294 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f8a75e1b260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 10:48:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:48:23.295 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  9 10:48:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:48:23.295 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f8a75eb8170>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 10:48:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:48:23.295 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  9 10:48:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:48:23.295 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f8a75e1b290>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 10:48:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:48:23.295 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  9 10:48:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:48:23.295 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f8a75e1b2f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 10:48:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:48:23.296 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  9 10:48:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:48:23.296 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f8a75e1b350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 10:48:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:48:23.296 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  9 10:48:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:48:23.296 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f8a7710f530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 10:48:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:48:23.296 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  9 10:48:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:48:23.296 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f8a78ed1430>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 10:48:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:48:23.296 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  9 10:48:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:48:23.297 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f8a75e1b3b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 10:48:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:48:23.297 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  9 10:48:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:48:23.297 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f8a75e1b410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 10:48:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:48:23.297 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  9 10:48:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:48:23.297 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f8a75eb8410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 10:48:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:48:23.297 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  9 10:48:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:48:23.297 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f8a75e1be90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 10:48:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:48:23.297 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  9 10:48:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:48:23.298 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f8a75e1b470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 10:48:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:48:23.298 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  9 10:48:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:48:23.298 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f8a75e1b830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 10:48:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:48:23.298 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  9 10:48:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:48:23.298 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f8a75e1b4d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 10:48:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:48:23.298 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  9 10:48:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:48:23.298 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f8a75e1bad0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 10:48:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:48:23.298 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  9 10:48:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:48:23.299 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f8a75e1b530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 10:48:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:48:23.299 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  9 10:48:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:48:23.299 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f8a75e1bd40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 10:48:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:48:23.299 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  9 10:48:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:48:23.299 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f8a75e1bda0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 10:48:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:48:23.299 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  9 10:48:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:48:23.299 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f8a75e1be00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 10:48:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:48:23.299 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  9 10:48:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:48:23.300 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f8a75e1bef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 10:48:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:48:23.300 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  9 10:48:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:48:23.300 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f8a75e1b770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 10:48:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:48:23.300 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  9 10:48:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:48:23.300 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f8a75e1bf80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 10:48:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:48:23.300 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  9 10:48:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:48:23.301 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 10:48:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:48:23.301 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 10:48:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:48:23.301 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 10:48:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:48:23.301 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 10:48:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:48:23.301 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 10:48:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:48:23.301 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 10:48:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:48:23.301 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 10:48:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:48:23.301 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 10:48:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:48:23.301 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 10:48:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:48:23.301 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 10:48:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:48:23.301 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 10:48:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:48:23.301 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 10:48:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:48:23.301 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 10:48:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:48:23.301 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 10:48:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:48:23.301 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 10:48:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:48:23.301 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 10:48:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:48:23.302 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 10:48:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:48:23.302 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 10:48:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:48:23.302 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 10:48:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:48:23.302 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 10:48:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:48:23.302 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 10:48:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:48:23.302 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 10:48:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:48:23.302 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 10:48:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:48:23.302 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 10:48:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:48:23.302 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 10:48:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:48:23.302 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
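The ceilometer lines above are one complete polling cycle: every pollster is registered against a shared ThreadPoolExecutor, the local_instances discovery runs once and returns an empty list (no instances on this host yet), so each meter is skipped and then marked finished. A minimal sketch of that register/discover/skip flow, with hypothetical names rather than the real ceilometer classes:

    from concurrent.futures import ThreadPoolExecutor

    def discover_local_instances():
        return []  # no instances on this host yet, as in the log above

    def run_pollster(name, discovery_cache):
        # each pollster consumes the shared per-cycle discovery result
        if not discovery_cache['local_instances']:
            print(f"Skip pollster {name}, no resources found this cycle")
            return
        print(f"Polling {name}")

    def run_cycle(pollsters):
        # discovery executes once per cycle; the cache is shared by all pollsters
        discovery_cache = {'local_instances': discover_local_instances()}
        with ThreadPoolExecutor() as executor:
            for name in pollsters:
                executor.submit(run_pollster, name, discovery_cache)

    run_cycle(['network.incoming.bytes', 'disk.device.capacity', 'cpu'])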
Dec  9 10:48:24 compute-0 nova_compute[189493]: 2025-12-09 10:48:24.841 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  9 10:48:24 compute-0 nova_compute[189493]: 2025-12-09 10:48:24.841 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  9 10:48:24 compute-0 nova_compute[189493]: 2025-12-09 10:48:24.842 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  9 10:48:24 compute-0 nova_compute[189493]: 2025-12-09 10:48:24.877 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  9 10:48:24 compute-0 nova_compute[189493]: 2025-12-09 10:48:24.879 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  9 10:48:24 compute-0 nova_compute[189493]: 2025-12-09 10:48:24.879 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  9 10:48:24 compute-0 nova_compute[189493]: 2025-12-09 10:48:24.880 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec  9 10:48:24 compute-0 podman[239616]: 2025-12-09 10:48:24.96483101 +0000 UTC m=+0.098448569 container health_status 8508a94dacd5acdb5dbf860f4282331529be5c86ebd3e90b10e1dde8bc5013e9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec  9 10:48:25 compute-0 nova_compute[189493]: 2025-12-09 10:48:25.258 189497 WARNING nova.virt.libvirt.driver [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec  9 10:48:25 compute-0 nova_compute[189493]: 2025-12-09 10:48:25.260 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5692MB free_disk=72.23674774169922GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec  9 10:48:25 compute-0 nova_compute[189493]: 2025-12-09 10:48:25.260 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  9 10:48:25 compute-0 nova_compute[189493]: 2025-12-09 10:48:25.261 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  9 10:48:25 compute-0 nova_compute[189493]: 2025-12-09 10:48:25.342 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec  9 10:48:25 compute-0 nova_compute[189493]: 2025-12-09 10:48:25.343 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec  9 10:48:25 compute-0 nova_compute[189493]: 2025-12-09 10:48:25.384 189497 DEBUG nova.compute.provider_tree [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Inventory has not changed in ProviderTree for provider: cdc1168d-33c9-4d2c-8f23-1b695a68afd0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec  9 10:48:25 compute-0 nova_compute[189493]: 2025-12-09 10:48:25.396 189497 DEBUG nova.scheduler.client.report [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Inventory has not changed for provider cdc1168d-33c9-4d2c-8f23-1b695a68afd0 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec  9 10:48:25 compute-0 nova_compute[189493]: 2025-12-09 10:48:25.398 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec  9 10:48:25 compute-0 nova_compute[189493]: 2025-12-09 10:48:25.398 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.138s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
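The inventory reported at 10:48:25.396 fixes what the scheduler can place on this node. Assuming placement's usual capacity formula, (total - reserved) * allocation_ratio, the figures work out as below (a quick arithmetic check, not nova code):

    # Capacity check for the inventory in the log line above.
    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 79,   'reserved': 0,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(f"{rc}: {capacity:g} schedulable")
    # VCPU: 32, MEMORY_MB: 7167, DISK_GB: 71.1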
Dec  9 10:48:27 compute-0 nova_compute[189493]: 2025-12-09 10:48:27.398 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  9 10:48:27 compute-0 nova_compute[189493]: 2025-12-09 10:48:27.398 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec  9 10:48:27 compute-0 nova_compute[189493]: 2025-12-09 10:48:27.398 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec  9 10:48:27 compute-0 nova_compute[189493]: 2025-12-09 10:48:27.419 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec  9 10:48:27 compute-0 nova_compute[189493]: 2025-12-09 10:48:27.421 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  9 10:48:27 compute-0 nova_compute[189493]: 2025-12-09 10:48:27.421 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec  9 10:48:27 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:48:27.496 106644 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=3, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '56:ee:a7', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '3e:d4:ad:27:cb:0f'}, ipsec=False) old=SB_Global(nb_cfg=2) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec  9 10:48:27 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:48:27.498 106644 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec  9 10:48:27 compute-0 nova_compute[189493]: 2025-12-09 10:48:27.843 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  9 10:48:28 compute-0 podman[239638]: 2025-12-09 10:48:28.959328291 +0000 UTC m=+0.096781993 container health_status ceb1c84a2b093143b9383b7e11364d7e851348d724743a0cd9ce4fd0c7070c92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible, org.label-schema.build-date=20251202)
Dec  9 10:48:29 compute-0 podman[203687]: time="2025-12-09T10:48:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  9 10:48:29 compute-0 podman[203687]: @ - - [09/Dec/2025:10:48:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28290 "" "Go-http-client/1.1"
Dec  9 10:48:29 compute-0 podman[203687]: @ - - [09/Dec/2025:10:48:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4273 "" "Go-http-client/1.1"
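The two GET requests above are ordinary libpod REST calls arriving over the podman API socket (the prometheus-podman-exporter mounts /run/podman/podman.sock, per its config_data earlier in the log). A self-contained way to issue the same containers/json query from Python, assuming that socket path and the v4.9.3 endpoint seen in the access log:

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        # HTTPConnection variant that connects to a unix socket instead of TCP.
        def __init__(self, path):
            super().__init__('localhost')
            self.unix_path = path

        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self.unix_path)

    conn = UnixHTTPConnection('/run/podman/podman.sock')
    conn.request('GET', '/v4.9.3/libpod/containers/json?all=true')
    containers = json.loads(conn.getresponse().read())
    print(len(containers), 'containers')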
Dec  9 10:48:31 compute-0 openstack_network_exporter[205823]: ERROR   10:48:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  9 10:48:31 compute-0 openstack_network_exporter[205823]: ERROR   10:48:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  9 10:48:31 compute-0 openstack_network_exporter[205823]: ERROR   10:48:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  9 10:48:31 compute-0 openstack_network_exporter[205823]: ERROR   10:48:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  9 10:48:31 compute-0 openstack_network_exporter[205823]: ERROR   10:48:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
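These exporter errors are consistent with a compute node: openstack_network_exporter locates each daemon through its *.ctl control socket, ovn-northd runs on the controllers rather than here, and the dpif-netdev calls fail because no userspace (DPDK) datapath exists on this host. A small check of the runtime directories, with paths assumed from the exporter's volume mounts logged earlier:

    import glob
    import os

    # Look for the control sockets the exporter's appctl calls depend on.
    for rundir in ('/run/openvswitch', '/run/ovn'):
        ctl = glob.glob(os.path.join(rundir, '*.ctl'))
        print(rundir, '->', ctl or 'no control socket files found')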
Dec  9 10:48:31 compute-0 podman[239657]: 2025-12-09 10:48:31.95005747 +0000 UTC m=+0.102072773 container health_status 8ad198c17f1da12dc50d5e17562d0139fb2a2f84db056ee9551dbf4f34c4cb9d (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, io.buildah.version=1.29.0, io.openshift.tags=base rhel9, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.openshift.expose-services=, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., managed_by=edpm_ansible, build-date=2024-09-18T21:23:30, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, version=9.4, container_name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., release=1214.1726694543, config_id=edpm, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, name=ubi9, vcs-type=git, architecture=x86_64, com.redhat.component=ubi9-container, io.k8s.display-name=Red Hat Universal Base Image 9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec  9 10:48:33 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:48:33.500 106644 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=9ec27861-bbe8-48fb-b30f-25b967e1609e, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '3'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec  9 10:48:33 compute-0 podman[239677]: 2025-12-09 10:48:33.964878751 +0000 UTC m=+0.102152646 container health_status b432835229990b9e7cd237d75f8273b15e565fca524d4ea9a7c1f1bf3c773614 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.build-date=20251125)
Dec  9 10:48:33 compute-0 podman[239676]: 2025-12-09 10:48:33.987405551 +0000 UTC m=+0.126076776 container health_status 8f562587c42532f877bd4ac5090cf2d81dd9415b6201e22f74972e6d6b9e9403 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  9 10:48:37 compute-0 podman[239718]: 2025-12-09 10:48:37.950388576 +0000 UTC m=+0.104177473 container health_status 5da5cd4e36e0bba48fb617392bc8983ed1dbced7e4599ef74bb3327a2d50468d (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, name=ubi9-minimal, vcs-type=git, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public, architecture=x86_64, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, release=1755695350, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., io.openshift.expose-services=, managed_by=edpm_ansible)
Dec  9 10:48:40 compute-0 podman[239740]: 2025-12-09 10:48:40.023204765 +0000 UTC m=+0.162813110 container health_status e0a077177b2f078df1f170a6e5c0e8e08d4365b999ec0c487047ed6ab628f3d6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
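The three podman health_status lines above are the periodic container health probes: podman bind-mounts /var/lib/openstack/healthchecks/<name> at /openstack inside each container and runs the configured /openstack/healthcheck test. A minimal sketch of firing the same probes by hand, assuming podman is installed and the container names match the log:

    # Run the same health probes podman fires periodically for the three
    # containers logged above; `podman healthcheck run` exits 0 on healthy.
    import subprocess

    for name in ('ovn_metadata_agent', 'openstack_network_exporter',
                 'ovn_controller'):
        rc = subprocess.run(['podman', 'healthcheck', 'run', name]).returncode
        print(name, 'healthy' if rc == 0 else f'unhealthy (rc={rc})')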
Dec  9 10:48:41 compute-0 nova_compute[189493]: 2025-12-09 10:48:41.043 189497 DEBUG oslo_concurrency.lockutils [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Acquiring lock "41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  9 10:48:41 compute-0 nova_compute[189493]: 2025-12-09 10:48:41.044 189497 DEBUG oslo_concurrency.lockutils [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lock "41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
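The "Acquiring lock" / "acquired by" pair above is one pattern: the entire build for instance 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f is serialized behind a semaphore named after the instance UUID, so a concurrent delete or reschedule of the same instance must wait. A minimal sketch of that oslo.concurrency pattern (illustrative names, not nova's actual code):

    # Serialize work per instance UUID with oslo.concurrency, matching the
    # lock lines above. Illustrative sketch only.
    from oslo_concurrency import lockutils

    def build_and_run_instance(instance_uuid):
        @lockutils.synchronized(instance_uuid)
        def _locked_do_build_and_run_instance():
            # all build steps run while the per-instance lock is held
            print('building', instance_uuid)

        _locked_do_build_and_run_instance()

    build_and_run_instance('41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f')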
Dec  9 10:48:41 compute-0 nova_compute[189493]: 2025-12-09 10:48:41.065 189497 DEBUG nova.compute.manager [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec  9 10:48:41 compute-0 nova_compute[189493]: 2025-12-09 10:48:41.207 189497 DEBUG oslo_concurrency.lockutils [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  9 10:48:41 compute-0 nova_compute[189493]: 2025-12-09 10:48:41.209 189497 DEBUG oslo_concurrency.lockutils [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  9 10:48:41 compute-0 nova_compute[189493]: 2025-12-09 10:48:41.224 189497 DEBUG nova.virt.hardware [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec  9 10:48:41 compute-0 nova_compute[189493]: 2025-12-09 10:48:41.225 189497 INFO nova.compute.claims [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec  9 10:48:41 compute-0 nova_compute[189493]: 2025-12-09 10:48:41.349 189497 DEBUG nova.compute.provider_tree [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Inventory has not changed in ProviderTree for provider: cdc1168d-33c9-4d2c-8f23-1b695a68afd0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  9 10:48:41 compute-0 nova_compute[189493]: 2025-12-09 10:48:41.370 189497 DEBUG nova.scheduler.client.report [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Inventory has not changed for provider cdc1168d-33c9-4d2c-8f23-1b695a68afd0 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
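The inventory dict above is what the resource tracker reports to placement; usable capacity per resource class is (total - reserved) * allocation_ratio, the formula placement applies when checking allocations. Worked out for the logged numbers:

    # Placement's capacity rule applied to the inventory logged above:
    # usable = (total - reserved) * allocation_ratio
    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 79,   'reserved': 0,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        usable = (inv['total'] - inv['reserved']) * inv['allocation_ratio']
        print(rc, int(usable))   # VCPU 32, MEMORY_MB 7167, DISK_GB 71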
Dec  9 10:48:41 compute-0 nova_compute[189493]: 2025-12-09 10:48:41.388 189497 DEBUG oslo_concurrency.lockutils [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.180s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  9 10:48:41 compute-0 nova_compute[189493]: 2025-12-09 10:48:41.389 189497 DEBUG nova.compute.manager [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec  9 10:48:41 compute-0 nova_compute[189493]: 2025-12-09 10:48:41.436 189497 DEBUG nova.compute.manager [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec  9 10:48:41 compute-0 nova_compute[189493]: 2025-12-09 10:48:41.437 189497 DEBUG nova.network.neutron [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec  9 10:48:41 compute-0 nova_compute[189493]: 2025-12-09 10:48:41.461 189497 INFO nova.virt.libvirt.driver [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec  9 10:48:41 compute-0 nova_compute[189493]: 2025-12-09 10:48:41.501 189497 DEBUG nova.compute.manager [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec  9 10:48:41 compute-0 nova_compute[189493]: 2025-12-09 10:48:41.583 189497 DEBUG nova.compute.manager [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec  9 10:48:41 compute-0 nova_compute[189493]: 2025-12-09 10:48:41.586 189497 DEBUG nova.virt.libvirt.driver [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec  9 10:48:41 compute-0 nova_compute[189493]: 2025-12-09 10:48:41.587 189497 INFO nova.virt.libvirt.driver [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f] Creating image(s)#033[00m
Dec  9 10:48:41 compute-0 nova_compute[189493]: 2025-12-09 10:48:41.589 189497 DEBUG oslo_concurrency.lockutils [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Acquiring lock "/var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  9 10:48:41 compute-0 nova_compute[189493]: 2025-12-09 10:48:41.590 189497 DEBUG oslo_concurrency.lockutils [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lock "/var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  9 10:48:41 compute-0 nova_compute[189493]: 2025-12-09 10:48:41.592 189497 DEBUG oslo_concurrency.lockutils [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lock "/var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  9 10:48:41 compute-0 nova_compute[189493]: 2025-12-09 10:48:41.593 189497 DEBUG oslo_concurrency.lockutils [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Acquiring lock "9e23edb89d785ecc8dd3ccb4d60aa458ce75a798" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  9 10:48:41 compute-0 nova_compute[189493]: 2025-12-09 10:48:41.594 189497 DEBUG oslo_concurrency.lockutils [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lock "9e23edb89d785ecc8dd3ccb4d60aa458ce75a798" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  9 10:48:42 compute-0 nova_compute[189493]: 2025-12-09 10:48:42.559 189497 WARNING oslo_policy.policy [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.#033[00m
Dec  9 10:48:42 compute-0 nova_compute[189493]: 2025-12-09 10:48:42.560 189497 WARNING oslo_policy.policy [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.#033[00m
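The duplicated oslo.policy warning is emitted once per loaded policy file and names its own fix: convert the JSON policy file to YAML with oslopolicy-convert-json-to-yaml. A hedged sketch of the conversion; the flag names follow the linked oslo.policy documentation and the paths are illustrative, so confirm both against your deployment (the tool's --help) before running:

    # Convert a JSON policy file to YAML as the warning advises.
    # Flags are an assumption taken from the oslo.policy docs; paths are
    # illustrative, not taken from this log.
    import subprocess

    subprocess.run(
        ['oslopolicy-convert-json-to-yaml',
         '--namespace', 'nova',
         '--policy-file', '/etc/nova/policy.json',
         '--output-file', '/etc/nova/policy.yaml'],
        check=True)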
Dec  9 10:48:43 compute-0 nova_compute[189493]: 2025-12-09 10:48:43.035 189497 DEBUG oslo_concurrency.processutils [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e23edb89d785ecc8dd3ccb4d60aa458ce75a798.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  9 10:48:43 compute-0 nova_compute[189493]: 2025-12-09 10:48:43.138 189497 DEBUG oslo_concurrency.processutils [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e23edb89d785ecc8dd3ccb4d60aa458ce75a798.part --force-share --output=json" returned: 0 in 0.103s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
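The oslo_concurrency.prlimit wrapper above caps qemu-img at 1073741824 bytes (1 GiB) of address space and 30 s of CPU time, so a corrupt or malicious image cannot wedge the compute service while being inspected. The same caps can be expressed directly through oslo.concurrency, sketched here with the command mirrored from the log (nova keeps equivalent limit constants in nova.virt.images):

    # `qemu-img info` under the resource caps visible in the logged command
    # line (--as=1073741824 --cpu=30). Sketch of the oslo.concurrency API.
    from oslo_concurrency import processutils

    limits = processutils.ProcessLimits(address_space=1024 ** 3, cpu_time=30)
    out, _err = processutils.execute(
        'env', 'LC_ALL=C', 'LANG=C',
        'qemu-img', 'info',
        '/var/lib/nova/instances/_base/'
        '9e23edb89d785ecc8dd3ccb4d60aa458ce75a798.part',
        '--force-share', '--output=json',
        prlimit=limits)
    print(out)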
Dec  9 10:48:43 compute-0 nova_compute[189493]: 2025-12-09 10:48:43.140 189497 DEBUG nova.virt.images [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] 53d12211-5d5c-4333-b3ee-e3dcf1663767 was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242#033[00m
Dec  9 10:48:43 compute-0 nova_compute[189493]: 2025-12-09 10:48:43.143 189497 DEBUG nova.privsep.utils [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63#033[00m
Dec  9 10:48:43 compute-0 nova_compute[189493]: 2025-12-09 10:48:43.144 189497 DEBUG oslo_concurrency.processutils [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/9e23edb89d785ecc8dd3ccb4d60aa458ce75a798.part /var/lib/nova/instances/_base/9e23edb89d785ecc8dd3ccb4d60aa458ce75a798.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  9 10:48:43 compute-0 nova_compute[189493]: 2025-12-09 10:48:43.633 189497 DEBUG oslo_concurrency.processutils [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/9e23edb89d785ecc8dd3ccb4d60aa458ce75a798.part /var/lib/nova/instances/_base/9e23edb89d785ecc8dd3ccb4d60aa458ce75a798.converted" returned: 0 in 0.490s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  9 10:48:43 compute-0 nova_compute[189493]: 2025-12-09 10:48:43.642 189497 DEBUG oslo_concurrency.processutils [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e23edb89d785ecc8dd3ccb4d60aa458ce75a798.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  9 10:48:43 compute-0 nova_compute[189493]: 2025-12-09 10:48:43.731 189497 DEBUG oslo_concurrency.processutils [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e23edb89d785ecc8dd3ccb4d60aa458ce75a798.converted --force-share --output=json" returned: 0 in 0.089s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  9 10:48:43 compute-0 nova_compute[189493]: 2025-12-09 10:48:43.734 189497 DEBUG oslo_concurrency.lockutils [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lock "9e23edb89d785ecc8dd3ccb4d60aa458ce75a798" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 2.139s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
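Taken together, the lines since the fetch_func_sync lock was acquired show the image-cache fill: glance image 53d12211-5d5c-4333-b3ee-e3dcf1663767 lands in _base/<hash>.part, is probed, converted from qcow2 to raw as .converted, probed again, and promoted to the final _base/<hash> name that the following qemu-img info calls query. Compressed into a sketch, with paths from the log, the final rename inferred from those later lines, and all error handling omitted:

    # The cache fill just logged: convert the downloaded qcow2 to raw, then
    # promote it to the final cache name. Sketch only.
    import os
    import subprocess

    base = ('/var/lib/nova/instances/_base/'
            '9e23edb89d785ecc8dd3ccb4d60aa458ce75a798')
    subprocess.run(['qemu-img', 'convert', '-t', 'none', '-O', 'raw',
                    '-f', 'qcow2', base + '.part', base + '.converted'],
                   check=True)
    os.rename(base + '.converted', base)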
Dec  9 10:48:43 compute-0 nova_compute[189493]: 2025-12-09 10:48:43.764 189497 INFO oslo.privsep.daemon [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'nova.privsep.sys_admin_pctxt', '--privsep_sock_path', '/tmp/tmp35j06s_p/privsep.sock']#033[00m
Dec  9 10:48:43 compute-0 nova_compute[189493]: 2025-12-09 10:48:43.966 189497 DEBUG nova.network.neutron [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f] Successfully created port: 2c684388-b6d9-4de0-8691-29807fabed2c _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548#033[00m
Dec  9 10:48:43 compute-0 podman[239783]: 2025-12-09 10:48:43.972556383 +0000 UTC m=+0.117143572 container health_status d3a438131bb4ae6fd62d2e1493edbbbd51d1b8d6cbe1e9243f414a3aa421452b (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec  9 10:48:44 compute-0 nova_compute[189493]: 2025-12-09 10:48:44.610 189497 INFO oslo.privsep.daemon [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Spawned new privsep daemon via rootwrap#033[00m
Dec  9 10:48:44 compute-0 nova_compute[189493]: 2025-12-09 10:48:44.389 239806 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Dec  9 10:48:44 compute-0 nova_compute[189493]: 2025-12-09 10:48:44.393 239806 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Dec  9 10:48:44 compute-0 nova_compute[189493]: 2025-12-09 10:48:44.395 239806 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/none#033[00m
Dec  9 10:48:44 compute-0 nova_compute[189493]: 2025-12-09 10:48:44.395 239806 INFO oslo.privsep.daemon [-] privsep daemon running as pid 239806#033[00m
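The four privsep lines with the new pid 239806 are the root helper nova just spawned through sudo and nova-rootwrap; note that it drops to exactly the capability set logged (CAP_CHOWN through CAP_SYS_ADMIN) rather than keeping full root. A sketch of declaring such a context with oslo.privsep, assuming the API as documented; nova's real context is nova.privsep.sys_admin_pctxt, named in the --privsep_context argument above:

    # An oslo.privsep context matching the capability set the daemon
    # reports above. Sketch, not nova's module.
    from oslo_privsep import capabilities as caps
    from oslo_privsep import priv_context

    sys_admin_pctxt = priv_context.PrivContext(
        'nova_sketch',
        cfg_section='nova_sys_admin',
        pypath=__name__ + '.sys_admin_pctxt',
        capabilities=[caps.CAP_CHOWN, caps.CAP_DAC_OVERRIDE,
                      caps.CAP_DAC_READ_SEARCH, caps.CAP_FOWNER,
                      caps.CAP_NET_ADMIN, caps.CAP_SYS_ADMIN])

    @sys_admin_pctxt.entrypoint
    def chown_as_root(path, uid, gid):
        import os            # runs inside the privileged daemon process
        os.chown(path, uid, gid)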
Dec  9 10:48:44 compute-0 nova_compute[189493]: 2025-12-09 10:48:44.704 189497 DEBUG oslo_concurrency.processutils [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e23edb89d785ecc8dd3ccb4d60aa458ce75a798 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  9 10:48:44 compute-0 nova_compute[189493]: 2025-12-09 10:48:44.763 189497 DEBUG oslo_concurrency.processutils [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e23edb89d785ecc8dd3ccb4d60aa458ce75a798 --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  9 10:48:44 compute-0 nova_compute[189493]: 2025-12-09 10:48:44.764 189497 DEBUG oslo_concurrency.lockutils [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Acquiring lock "9e23edb89d785ecc8dd3ccb4d60aa458ce75a798" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  9 10:48:44 compute-0 nova_compute[189493]: 2025-12-09 10:48:44.765 189497 DEBUG oslo_concurrency.lockutils [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lock "9e23edb89d785ecc8dd3ccb4d60aa458ce75a798" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  9 10:48:44 compute-0 nova_compute[189493]: 2025-12-09 10:48:44.783 189497 DEBUG oslo_concurrency.processutils [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e23edb89d785ecc8dd3ccb4d60aa458ce75a798 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  9 10:48:44 compute-0 nova_compute[189493]: 2025-12-09 10:48:44.852 189497 DEBUG oslo_concurrency.processutils [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e23edb89d785ecc8dd3ccb4d60aa458ce75a798 --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  9 10:48:44 compute-0 nova_compute[189493]: 2025-12-09 10:48:44.853 189497 DEBUG oslo_concurrency.processutils [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/9e23edb89d785ecc8dd3ccb4d60aa458ce75a798,backing_fmt=raw /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  9 10:48:44 compute-0 nova_compute[189493]: 2025-12-09 10:48:44.923 189497 DEBUG oslo_concurrency.processutils [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/9e23edb89d785ecc8dd3ccb4d60aa458ce75a798,backing_fmt=raw /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk 1073741824" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
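The instance's root disk is created as a 1073741824-byte (1 GiB) qcow2 overlay whose backing file is the shared raw base image, so the guest disk starts nearly empty and only copy-on-write data consumes space under /var/lib/nova/instances. The equivalent standalone command, sketched with the paths from the log:

    # The copy-on-write root disk from the log: a 1 GiB qcow2 overlay on the
    # shared raw base image.
    import subprocess

    base = ('/var/lib/nova/instances/_base/'
            '9e23edb89d785ecc8dd3ccb4d60aa458ce75a798')
    disk = ('/var/lib/nova/instances/'
            '41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk')
    subprocess.run(
        ['qemu-img', 'create', '-f', 'qcow2',
         '-o', f'backing_file={base},backing_fmt=raw',
         disk, '1073741824'],
        check=True)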
Dec  9 10:48:44 compute-0 nova_compute[189493]: 2025-12-09 10:48:44.924 189497 DEBUG oslo_concurrency.lockutils [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lock "9e23edb89d785ecc8dd3ccb4d60aa458ce75a798" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.159s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  9 10:48:44 compute-0 nova_compute[189493]: 2025-12-09 10:48:44.925 189497 DEBUG oslo_concurrency.processutils [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e23edb89d785ecc8dd3ccb4d60aa458ce75a798 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  9 10:48:44 compute-0 nova_compute[189493]: 2025-12-09 10:48:44.996 189497 DEBUG oslo_concurrency.processutils [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e23edb89d785ecc8dd3ccb4d60aa458ce75a798 --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  9 10:48:44 compute-0 nova_compute[189493]: 2025-12-09 10:48:44.998 189497 DEBUG nova.virt.disk.api [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Checking if we can resize image /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166#033[00m
Dec  9 10:48:45 compute-0 nova_compute[189493]: 2025-12-09 10:48:44.999 189497 DEBUG oslo_concurrency.processutils [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  9 10:48:45 compute-0 nova_compute[189493]: 2025-12-09 10:48:45.057 189497 DEBUG oslo_concurrency.processutils [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  9 10:48:45 compute-0 nova_compute[189493]: 2025-12-09 10:48:45.060 189497 DEBUG nova.virt.disk.api [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Cannot resize image /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172#033[00m
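"Cannot resize image ... to a smaller size" is not an error: the overlay's virtual size already matches the flavor's 1 GiB root disk, and nova only ever grows disks. The guard boils down to comparing the requested size against the virtual-size that qemu-img reports, roughly:

    # Grow-only resize guard, sketched. virtual-size comes from the same
    # `qemu-img info --output=json` call the log shows.
    import json
    import subprocess

    def can_resize_image(path, new_size_bytes):
        out = subprocess.run(
            ['qemu-img', 'info', '--force-share', '--output=json', path],
            capture_output=True, text=True, check=True).stdout
        return json.loads(out)['virtual-size'] < new_size_bytes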
Dec  9 10:48:45 compute-0 nova_compute[189493]: 2025-12-09 10:48:45.061 189497 DEBUG nova.objects.instance [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lazy-loading 'migration_context' on Instance uuid 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  9 10:48:45 compute-0 nova_compute[189493]: 2025-12-09 10:48:45.497 189497 DEBUG oslo_concurrency.lockutils [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Acquiring lock "/var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  9 10:48:45 compute-0 nova_compute[189493]: 2025-12-09 10:48:45.498 189497 DEBUG oslo_concurrency.lockutils [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lock "/var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  9 10:48:45 compute-0 nova_compute[189493]: 2025-12-09 10:48:45.500 189497 DEBUG oslo_concurrency.lockutils [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lock "/var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  9 10:48:45 compute-0 nova_compute[189493]: 2025-12-09 10:48:45.501 189497 DEBUG oslo_concurrency.lockutils [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Acquiring lock "ephemeral_1_0706d66" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  9 10:48:45 compute-0 nova_compute[189493]: 2025-12-09 10:48:45.502 189497 DEBUG oslo_concurrency.lockutils [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lock "ephemeral_1_0706d66" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  9 10:48:45 compute-0 nova_compute[189493]: 2025-12-09 10:48:45.503 189497 DEBUG oslo_concurrency.processutils [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f raw /var/lib/nova/instances/_base/ephemeral_1_0706d66 1G execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  9 10:48:45 compute-0 nova_compute[189493]: 2025-12-09 10:48:45.552 189497 DEBUG oslo_concurrency.processutils [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f raw /var/lib/nova/instances/_base/ephemeral_1_0706d66 1G" returned: 0 in 0.048s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  9 10:48:45 compute-0 nova_compute[189493]: 2025-12-09 10:48:45.553 189497 DEBUG oslo_concurrency.processutils [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Running cmd (subprocess): mkfs -t vfat -n ephemeral0 /var/lib/nova/instances/_base/ephemeral_1_0706d66 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  9 10:48:45 compute-0 nova_compute[189493]: 2025-12-09 10:48:45.611 189497 DEBUG oslo_concurrency.processutils [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] CMD "mkfs -t vfat -n ephemeral0 /var/lib/nova/instances/_base/ephemeral_1_0706d66" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  9 10:48:45 compute-0 nova_compute[189493]: 2025-12-09 10:48:45.613 189497 DEBUG oslo_concurrency.lockutils [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lock "ephemeral_1_0706d66" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.111s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
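The ephemeral disk repeats the cache-then-overlay scheme with a locally built template: a raw 1 GiB file formatted vfat with the label ephemeral0, cached as _base/ephemeral_1_0706d66 and shared by every instance with the same ephemeral size and filesystem. The template build, as a sketch:

    # Ephemeral template build from the log: raw 1 GiB file, vfat-formatted,
    # labeled ephemeral0. Requires dosfstools for the vfat mkfs backend.
    import subprocess

    tmpl = '/var/lib/nova/instances/_base/ephemeral_1_0706d66'
    subprocess.run(['qemu-img', 'create', '-f', 'raw', tmpl, '1G'],
                   check=True)
    subprocess.run(['mkfs', '-t', 'vfat', '-n', 'ephemeral0', tmpl],
                   check=True)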
Dec  9 10:48:45 compute-0 nova_compute[189493]: 2025-12-09 10:48:45.638 189497 DEBUG oslo_concurrency.processutils [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  9 10:48:45 compute-0 nova_compute[189493]: 2025-12-09 10:48:45.735 189497 DEBUG oslo_concurrency.processutils [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.097s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  9 10:48:45 compute-0 nova_compute[189493]: 2025-12-09 10:48:45.737 189497 DEBUG oslo_concurrency.lockutils [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Acquiring lock "ephemeral_1_0706d66" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  9 10:48:45 compute-0 nova_compute[189493]: 2025-12-09 10:48:45.737 189497 DEBUG oslo_concurrency.lockutils [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lock "ephemeral_1_0706d66" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  9 10:48:45 compute-0 nova_compute[189493]: 2025-12-09 10:48:45.761 189497 DEBUG oslo_concurrency.processutils [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  9 10:48:45 compute-0 nova_compute[189493]: 2025-12-09 10:48:45.821 189497 DEBUG nova.network.neutron [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f] Successfully updated port: 2c684388-b6d9-4de0-8691-29807fabed2c _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec  9 10:48:45 compute-0 nova_compute[189493]: 2025-12-09 10:48:45.827 189497 DEBUG oslo_concurrency.processutils [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  9 10:48:45 compute-0 nova_compute[189493]: 2025-12-09 10:48:45.828 189497 DEBUG oslo_concurrency.processutils [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.eph0 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  9 10:48:45 compute-0 nova_compute[189493]: 2025-12-09 10:48:45.842 189497 DEBUG oslo_concurrency.lockutils [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Acquiring lock "refresh_cache-41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  9 10:48:45 compute-0 nova_compute[189493]: 2025-12-09 10:48:45.843 189497 DEBUG oslo_concurrency.lockutils [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Acquired lock "refresh_cache-41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  9 10:48:45 compute-0 nova_compute[189493]: 2025-12-09 10:48:45.843 189497 DEBUG nova.network.neutron [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  9 10:48:45 compute-0 nova_compute[189493]: 2025-12-09 10:48:45.867 189497 DEBUG oslo_concurrency.processutils [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.eph0 1073741824" returned: 0 in 0.039s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  9 10:48:45 compute-0 nova_compute[189493]: 2025-12-09 10:48:45.869 189497 DEBUG oslo_concurrency.lockutils [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lock "ephemeral_1_0706d66" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.131s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  9 10:48:45 compute-0 nova_compute[189493]: 2025-12-09 10:48:45.870 189497 DEBUG oslo_concurrency.processutils [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  9 10:48:45 compute-0 nova_compute[189493]: 2025-12-09 10:48:45.958 189497 DEBUG oslo_concurrency.processutils [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.089s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  9 10:48:45 compute-0 nova_compute[189493]: 2025-12-09 10:48:45.959 189497 DEBUG nova.virt.libvirt.driver [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec  9 10:48:45 compute-0 nova_compute[189493]: 2025-12-09 10:48:45.959 189497 DEBUG nova.virt.libvirt.driver [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f] Ensure instance console log exists: /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec  9 10:48:45 compute-0 nova_compute[189493]: 2025-12-09 10:48:45.960 189497 DEBUG oslo_concurrency.lockutils [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  9 10:48:45 compute-0 nova_compute[189493]: 2025-12-09 10:48:45.960 189497 DEBUG oslo_concurrency.lockutils [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  9 10:48:45 compute-0 nova_compute[189493]: 2025-12-09 10:48:45.961 189497 DEBUG oslo_concurrency.lockutils [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  9 10:48:46 compute-0 nova_compute[189493]: 2025-12-09 10:48:46.053 189497 DEBUG nova.network.neutron [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  9 10:48:46 compute-0 nova_compute[189493]: 2025-12-09 10:48:46.360 189497 DEBUG nova.compute.manager [req-03540681-d93c-4b28-962d-20543abce1fc req-f5b1598e-12af-4775-baf4-aa4e7ed049f4 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] [instance: 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f] Received event network-changed-2c684388-b6d9-4de0-8691-29807fabed2c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  9 10:48:46 compute-0 nova_compute[189493]: 2025-12-09 10:48:46.361 189497 DEBUG nova.compute.manager [req-03540681-d93c-4b28-962d-20543abce1fc req-f5b1598e-12af-4775-baf4-aa4e7ed049f4 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] [instance: 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f] Refreshing instance network info cache due to event network-changed-2c684388-b6d9-4de0-8691-29807fabed2c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  9 10:48:46 compute-0 nova_compute[189493]: 2025-12-09 10:48:46.361 189497 DEBUG oslo_concurrency.lockutils [req-03540681-d93c-4b28-962d-20543abce1fc req-f5b1598e-12af-4775-baf4-aa4e7ed049f4 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] Acquiring lock "refresh_cache-41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  9 10:48:49 compute-0 nova_compute[189493]: 2025-12-09 10:48:49.567 189497 DEBUG nova.network.neutron [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f] Updating instance_info_cache with network_info: [{"id": "2c684388-b6d9-4de0-8691-29807fabed2c", "address": "fa:16:3e:c7:65:39", "network": {"id": "c5af7354-5afe-400a-9e13-5500648117d8", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.250", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "736bbfddbeea47e3ac9d863ba120b8f2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2c684388-b6", "ovs_interfaceid": "2c684388-b6d9-4de0-8691-29807fabed2c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
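The network_info payload above is plain JSON, which makes spot checks easy: port 2c684388-b6d9-4de0-8691-29807fabed2c on OVN-backed br-int, MAC fa:16:3e:c7:65:39, fixed IP 192.168.0.250/24, MTU 1442, still "active": false until the VIF is plugged. Assuming the array has been saved verbatim to a file named network_info.json, the interesting fields fall out with:

    # Pull port id, MAC, fixed IP, and MTU from the network_info JSON above
    # (assumed saved verbatim to network_info.json).
    import json

    with open('network_info.json') as f:
        vifs = json.load(f)
    for vif in vifs:
        net = vif['network']
        ip = net['subnets'][0]['ips'][0]['address']
        print(vif['id'], vif['address'], ip, net['meta']['mtu'])
    # -> 2c684388-b6d9-4de0-8691-29807fabed2c fa:16:3e:c7:65:39
    #    192.168.0.250 1442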
Dec  9 10:48:49 compute-0 nova_compute[189493]: 2025-12-09 10:48:49.675 189497 DEBUG oslo_concurrency.lockutils [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Releasing lock "refresh_cache-41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  9 10:48:49 compute-0 nova_compute[189493]: 2025-12-09 10:48:49.676 189497 DEBUG nova.compute.manager [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f] Instance network_info: |[{"id": "2c684388-b6d9-4de0-8691-29807fabed2c", "address": "fa:16:3e:c7:65:39", "network": {"id": "c5af7354-5afe-400a-9e13-5500648117d8", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.250", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "736bbfddbeea47e3ac9d863ba120b8f2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2c684388-b6", "ovs_interfaceid": "2c684388-b6d9-4de0-8691-29807fabed2c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec  9 10:48:49 compute-0 nova_compute[189493]: 2025-12-09 10:48:49.677 189497 DEBUG oslo_concurrency.lockutils [req-03540681-d93c-4b28-962d-20543abce1fc req-f5b1598e-12af-4775-baf4-aa4e7ed049f4 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] Acquired lock "refresh_cache-41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  9 10:48:49 compute-0 nova_compute[189493]: 2025-12-09 10:48:49.678 189497 DEBUG nova.network.neutron [req-03540681-d93c-4b28-962d-20543abce1fc req-f5b1598e-12af-4775-baf4-aa4e7ed049f4 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] [instance: 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f] Refreshing network info cache for port 2c684388-b6d9-4de0-8691-29807fabed2c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  9 10:48:49 compute-0 nova_compute[189493]: 2025-12-09 10:48:49.682 189497 DEBUG nova.virt.libvirt.driver [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f] Start _get_guest_xml network_info=[{"id": "2c684388-b6d9-4de0-8691-29807fabed2c", "address": "fa:16:3e:c7:65:39", "network": {"id": "c5af7354-5afe-400a-9e13-5500648117d8", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.250", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "736bbfddbeea47e3ac9d863ba120b8f2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2c684388-b6", "ovs_interfaceid": "2c684388-b6d9-4de0-8691-29807fabed2c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.eph0': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2025-12-09T10:47:15Z,direct_url=<?>,disk_format='qcow2',id=53d12211-5d5c-4333-b3ee-e3dcf1663767,min_disk=0,min_ram=0,name='cirros',owner='736bbfddbeea47e3ac9d863ba120b8f2',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2025-12-09T10:47:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'encryption_options': None, 'encryption_format': None, 'disk_bus': 'virtio', 'boot_index': 0, 'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'size': 0, 'image_id': '53d12211-5d5c-4333-b3ee-e3dcf1663767'}], 'ephemerals': [{'encrypted': False, 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'device_name': '/dev/vdb', 'device_type': 'disk', 'guest_format': None, 'size': 1}], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  9 10:48:49 compute-0 nova_compute[189493]: 2025-12-09 10:48:49.704 189497 WARNING nova.virt.libvirt.driver [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  9 10:48:49 compute-0 nova_compute[189493]: 2025-12-09 10:48:49.714 189497 DEBUG nova.virt.libvirt.host [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  9 10:48:49 compute-0 nova_compute[189493]: 2025-12-09 10:48:49.716 189497 DEBUG nova.virt.libvirt.host [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  9 10:48:49 compute-0 nova_compute[189493]: 2025-12-09 10:48:49.722 189497 DEBUG nova.virt.libvirt.host [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  9 10:48:49 compute-0 nova_compute[189493]: 2025-12-09 10:48:49.723 189497 DEBUG nova.virt.libvirt.host [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
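The two cgroup probes show the host is cgroups-v2-only: no v1 CPU controller, but the unified hierarchy advertises one. On a v2 host that check reduces to looking for "cpu" in the root controller list, roughly (the path is the kernel's standard cgroup-v2 mount point):

    # cgroup-v2 CPU controller probe, sketched: the unified hierarchy lists
    # its available controllers in one file at the hierarchy root.
    with open('/sys/fs/cgroup/cgroup.controllers') as f:
        print('cpu controller found:', 'cpu' in f.read().split())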
Dec  9 10:48:49 compute-0 nova_compute[189493]: 2025-12-09 10:48:49.724 189497 DEBUG nova.virt.libvirt.driver [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  9 10:48:49 compute-0 nova_compute[189493]: 2025-12-09 10:48:49.724 189497 DEBUG nova.virt.hardware [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-09T10:47:21Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=1,extra_specs={},flavorid='cf91b364-8467-4d1e-8c92-f7d1fab99905',id=1,is_public=True,memory_mb=512,name='m1.small',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2025-12-09T10:47:15Z,direct_url=<?>,disk_format='qcow2',id=53d12211-5d5c-4333-b3ee-e3dcf1663767,min_disk=0,min_ram=0,name='cirros',owner='736bbfddbeea47e3ac9d863ba120b8f2',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2025-12-09T10:47:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  9 10:48:49 compute-0 nova_compute[189493]: 2025-12-09 10:48:49.725 189497 DEBUG nova.virt.hardware [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  9 10:48:49 compute-0 nova_compute[189493]: 2025-12-09 10:48:49.726 189497 DEBUG nova.virt.hardware [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  9 10:48:49 compute-0 nova_compute[189493]: 2025-12-09 10:48:49.726 189497 DEBUG nova.virt.hardware [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  9 10:48:49 compute-0 nova_compute[189493]: 2025-12-09 10:48:49.727 189497 DEBUG nova.virt.hardware [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  9 10:48:49 compute-0 nova_compute[189493]: 2025-12-09 10:48:49.727 189497 DEBUG nova.virt.hardware [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  9 10:48:49 compute-0 nova_compute[189493]: 2025-12-09 10:48:49.727 189497 DEBUG nova.virt.hardware [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  9 10:48:49 compute-0 nova_compute[189493]: 2025-12-09 10:48:49.728 189497 DEBUG nova.virt.hardware [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  9 10:48:49 compute-0 nova_compute[189493]: 2025-12-09 10:48:49.728 189497 DEBUG nova.virt.hardware [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  9 10:48:49 compute-0 nova_compute[189493]: 2025-12-09 10:48:49.729 189497 DEBUG nova.virt.hardware [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  9 10:48:49 compute-0 nova_compute[189493]: 2025-12-09 10:48:49.729 189497 DEBUG nova.virt.hardware [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
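
With no topology hints in the flavor extra_specs or image properties, the preferences are wide open (preferred 0:0:0, limits 65536 per level), so the only factorization of 1 vCPU is sockets=1, cores=1, threads=1, which is what lands in the <topology> element of the XML below. A toy enumerator showing why (hypothetical helper, not nova's code):

    from itertools import product

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536, max_threads=65536):
        # Every (sockets, cores, threads) triple whose product equals the
        # vCPU count and fits within the limits is a valid topology.
        for s, c, t in product(range(1, min(vcpus, max_sockets) + 1),
                               range(1, min(vcpus, max_cores) + 1),
                               range(1, min(vcpus, max_threads) + 1)):
            if s * c * t == vcpus:
                yield (s, c, t)

    print(list(possible_topologies(1)))  # [(1, 1, 1)], as the log reports
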
Dec  9 10:48:49 compute-0 nova_compute[189493]: 2025-12-09 10:48:49.737 189497 DEBUG nova.privsep.utils [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63#033[00m
Dec  9 10:48:49 compute-0 nova_compute[189493]: 2025-12-09 10:48:49.740 189497 DEBUG nova.virt.libvirt.vif [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-09T10:48:38Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='test_0',display_name='test_0',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='test-0',id=1,image_ref='53d12211-5d5c-4333-b3ee-e3dcf1663767',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='736bbfddbeea47e3ac9d863ba120b8f2',ramdisk_id='',reservation_id='r-o83aar8e',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,admin,member',image_base_image_ref='53d12211-5d5c-4333-b3ee-e3dcf1663767',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-09T10:48:41Z,user_data=None,user_id='e6d3a937c2a74eb0816d9f63820935e0',uuid=41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2c684388-b6d9-4de0-8691-29807fabed2c", "address": "fa:16:3e:c7:65:39", "network": {"id": "c5af7354-5afe-400a-9e13-5500648117d8", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.250", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "736bbfddbeea47e3ac9d863ba120b8f2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2c684388-b6", "ovs_interfaceid": "2c684388-b6d9-4de0-8691-29807fabed2c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  9 10:48:49 compute-0 nova_compute[189493]: 2025-12-09 10:48:49.741 189497 DEBUG nova.network.os_vif_util [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Converting VIF {"id": "2c684388-b6d9-4de0-8691-29807fabed2c", "address": "fa:16:3e:c7:65:39", "network": {"id": "c5af7354-5afe-400a-9e13-5500648117d8", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.250", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "736bbfddbeea47e3ac9d863ba120b8f2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2c684388-b6", "ovs_interfaceid": "2c684388-b6d9-4de0-8691-29807fabed2c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  9 10:48:49 compute-0 nova_compute[189493]: 2025-12-09 10:48:49.743 189497 DEBUG nova.network.os_vif_util [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c7:65:39,bridge_name='br-int',has_traffic_filtering=True,id=2c684388-b6d9-4de0-8691-29807fabed2c,network=Network(c5af7354-5afe-400a-9e13-5500648117d8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2c684388-b6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  9 10:48:49 compute-0 nova_compute[189493]: 2025-12-09 10:48:49.747 189497 DEBUG nova.objects.instance [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lazy-loading 'pci_devices' on Instance uuid 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  9 10:48:49 compute-0 nova_compute[189493]: 2025-12-09 10:48:49.918 189497 DEBUG nova.virt.libvirt.driver [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f] End _get_guest_xml xml=<domain type="kvm">
Dec  9 10:48:49 compute-0 nova_compute[189493]:  <uuid>41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f</uuid>
Dec  9 10:48:49 compute-0 nova_compute[189493]:  <name>instance-00000001</name>
Dec  9 10:48:49 compute-0 nova_compute[189493]:  <memory>524288</memory>
Dec  9 10:48:49 compute-0 nova_compute[189493]:  <vcpu>1</vcpu>
Dec  9 10:48:49 compute-0 nova_compute[189493]:  <metadata>
Dec  9 10:48:49 compute-0 nova_compute[189493]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  9 10:48:49 compute-0 nova_compute[189493]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  9 10:48:49 compute-0 nova_compute[189493]:      <nova:name>test_0</nova:name>
Dec  9 10:48:49 compute-0 nova_compute[189493]:      <nova:creationTime>2025-12-09 10:48:49</nova:creationTime>
Dec  9 10:48:49 compute-0 nova_compute[189493]:      <nova:flavor name="m1.small">
Dec  9 10:48:49 compute-0 nova_compute[189493]:        <nova:memory>512</nova:memory>
Dec  9 10:48:49 compute-0 nova_compute[189493]:        <nova:disk>1</nova:disk>
Dec  9 10:48:49 compute-0 nova_compute[189493]:        <nova:swap>0</nova:swap>
Dec  9 10:48:49 compute-0 nova_compute[189493]:        <nova:ephemeral>1</nova:ephemeral>
Dec  9 10:48:49 compute-0 nova_compute[189493]:        <nova:vcpus>1</nova:vcpus>
Dec  9 10:48:49 compute-0 nova_compute[189493]:      </nova:flavor>
Dec  9 10:48:49 compute-0 nova_compute[189493]:      <nova:owner>
Dec  9 10:48:49 compute-0 nova_compute[189493]:        <nova:user uuid="e6d3a937c2a74eb0816d9f63820935e0">admin</nova:user>
Dec  9 10:48:49 compute-0 nova_compute[189493]:        <nova:project uuid="736bbfddbeea47e3ac9d863ba120b8f2">admin</nova:project>
Dec  9 10:48:49 compute-0 nova_compute[189493]:      </nova:owner>
Dec  9 10:48:49 compute-0 nova_compute[189493]:      <nova:root type="image" uuid="53d12211-5d5c-4333-b3ee-e3dcf1663767"/>
Dec  9 10:48:49 compute-0 nova_compute[189493]:      <nova:ports>
Dec  9 10:48:49 compute-0 nova_compute[189493]:        <nova:port uuid="2c684388-b6d9-4de0-8691-29807fabed2c">
Dec  9 10:48:49 compute-0 nova_compute[189493]:          <nova:ip type="fixed" address="192.168.0.250" ipVersion="4"/>
Dec  9 10:48:49 compute-0 nova_compute[189493]:        </nova:port>
Dec  9 10:48:49 compute-0 nova_compute[189493]:      </nova:ports>
Dec  9 10:48:49 compute-0 nova_compute[189493]:    </nova:instance>
Dec  9 10:48:49 compute-0 nova_compute[189493]:  </metadata>
Dec  9 10:48:49 compute-0 nova_compute[189493]:  <sysinfo type="smbios">
Dec  9 10:48:49 compute-0 nova_compute[189493]:    <system>
Dec  9 10:48:49 compute-0 nova_compute[189493]:      <entry name="manufacturer">RDO</entry>
Dec  9 10:48:49 compute-0 nova_compute[189493]:      <entry name="product">OpenStack Compute</entry>
Dec  9 10:48:49 compute-0 nova_compute[189493]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  9 10:48:49 compute-0 nova_compute[189493]:      <entry name="serial">41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f</entry>
Dec  9 10:48:49 compute-0 nova_compute[189493]:      <entry name="uuid">41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f</entry>
Dec  9 10:48:49 compute-0 nova_compute[189493]:      <entry name="family">Virtual Machine</entry>
Dec  9 10:48:49 compute-0 nova_compute[189493]:    </system>
Dec  9 10:48:49 compute-0 nova_compute[189493]:  </sysinfo>
Dec  9 10:48:49 compute-0 nova_compute[189493]:  <os>
Dec  9 10:48:49 compute-0 nova_compute[189493]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  9 10:48:49 compute-0 nova_compute[189493]:    <boot dev="hd"/>
Dec  9 10:48:49 compute-0 nova_compute[189493]:    <smbios mode="sysinfo"/>
Dec  9 10:48:49 compute-0 nova_compute[189493]:  </os>
Dec  9 10:48:49 compute-0 nova_compute[189493]:  <features>
Dec  9 10:48:49 compute-0 nova_compute[189493]:    <acpi/>
Dec  9 10:48:49 compute-0 nova_compute[189493]:    <apic/>
Dec  9 10:48:49 compute-0 nova_compute[189493]:    <vmcoreinfo/>
Dec  9 10:48:49 compute-0 nova_compute[189493]:  </features>
Dec  9 10:48:49 compute-0 nova_compute[189493]:  <clock offset="utc">
Dec  9 10:48:49 compute-0 nova_compute[189493]:    <timer name="pit" tickpolicy="delay"/>
Dec  9 10:48:49 compute-0 nova_compute[189493]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  9 10:48:49 compute-0 nova_compute[189493]:    <timer name="hpet" present="no"/>
Dec  9 10:48:49 compute-0 nova_compute[189493]:  </clock>
Dec  9 10:48:49 compute-0 nova_compute[189493]:  <cpu mode="host-model" match="exact">
Dec  9 10:48:49 compute-0 nova_compute[189493]:    <topology sockets="1" cores="1" threads="1"/>
Dec  9 10:48:49 compute-0 nova_compute[189493]:  </cpu>
Dec  9 10:48:49 compute-0 nova_compute[189493]:  <devices>
Dec  9 10:48:49 compute-0 nova_compute[189493]:    <disk type="file" device="disk">
Dec  9 10:48:49 compute-0 nova_compute[189493]:      <driver name="qemu" type="qcow2" cache="none"/>
Dec  9 10:48:49 compute-0 nova_compute[189493]:      <source file="/var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk"/>
Dec  9 10:48:49 compute-0 nova_compute[189493]:      <target dev="vda" bus="virtio"/>
Dec  9 10:48:49 compute-0 nova_compute[189493]:    </disk>
Dec  9 10:48:49 compute-0 nova_compute[189493]:    <disk type="file" device="disk">
Dec  9 10:48:49 compute-0 nova_compute[189493]:      <driver name="qemu" type="qcow2" cache="none"/>
Dec  9 10:48:49 compute-0 nova_compute[189493]:      <source file="/var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.eph0"/>
Dec  9 10:48:49 compute-0 nova_compute[189493]:      <target dev="vdb" bus="virtio"/>
Dec  9 10:48:49 compute-0 nova_compute[189493]:    </disk>
Dec  9 10:48:49 compute-0 nova_compute[189493]:    <disk type="file" device="cdrom">
Dec  9 10:48:49 compute-0 nova_compute[189493]:      <driver name="qemu" type="raw" cache="none"/>
Dec  9 10:48:49 compute-0 nova_compute[189493]:      <source file="/var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.config"/>
Dec  9 10:48:49 compute-0 nova_compute[189493]:      <target dev="sda" bus="sata"/>
Dec  9 10:48:49 compute-0 nova_compute[189493]:    </disk>
Dec  9 10:48:49 compute-0 nova_compute[189493]:    <interface type="ethernet">
Dec  9 10:48:49 compute-0 nova_compute[189493]:      <mac address="fa:16:3e:c7:65:39"/>
Dec  9 10:48:49 compute-0 nova_compute[189493]:      <model type="virtio"/>
Dec  9 10:48:49 compute-0 nova_compute[189493]:      <driver name="vhost" rx_queue_size="512"/>
Dec  9 10:48:49 compute-0 nova_compute[189493]:      <mtu size="1442"/>
Dec  9 10:48:49 compute-0 nova_compute[189493]:      <target dev="tap2c684388-b6"/>
Dec  9 10:48:49 compute-0 nova_compute[189493]:    </interface>
Dec  9 10:48:49 compute-0 nova_compute[189493]:    <serial type="pty">
Dec  9 10:48:49 compute-0 nova_compute[189493]:      <log file="/var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/console.log" append="off"/>
Dec  9 10:48:49 compute-0 nova_compute[189493]:    </serial>
Dec  9 10:48:49 compute-0 nova_compute[189493]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  9 10:48:49 compute-0 nova_compute[189493]:    <video>
Dec  9 10:48:49 compute-0 nova_compute[189493]:      <model type="virtio"/>
Dec  9 10:48:49 compute-0 nova_compute[189493]:    </video>
Dec  9 10:48:49 compute-0 nova_compute[189493]:    <input type="tablet" bus="usb"/>
Dec  9 10:48:49 compute-0 nova_compute[189493]:    <rng model="virtio">
Dec  9 10:48:49 compute-0 nova_compute[189493]:      <backend model="random">/dev/urandom</backend>
Dec  9 10:48:49 compute-0 nova_compute[189493]:    </rng>
Dec  9 10:48:49 compute-0 nova_compute[189493]:    <controller type="pci" model="pcie-root"/>
Dec  9 10:48:49 compute-0 nova_compute[189493]:    <controller type="pci" model="pcie-root-port"/>
Dec  9 10:48:49 compute-0 nova_compute[189493]:    <controller type="pci" model="pcie-root-port"/>
Dec  9 10:48:49 compute-0 nova_compute[189493]:    <controller type="pci" model="pcie-root-port"/>
Dec  9 10:48:49 compute-0 nova_compute[189493]:    <controller type="pci" model="pcie-root-port"/>
Dec  9 10:48:49 compute-0 nova_compute[189493]:    <controller type="pci" model="pcie-root-port"/>
Dec  9 10:48:49 compute-0 nova_compute[189493]:    <controller type="pci" model="pcie-root-port"/>
Dec  9 10:48:49 compute-0 nova_compute[189493]:    <controller type="pci" model="pcie-root-port"/>
Dec  9 10:48:49 compute-0 nova_compute[189493]:    <controller type="pci" model="pcie-root-port"/>
Dec  9 10:48:49 compute-0 nova_compute[189493]:    <controller type="pci" model="pcie-root-port"/>
Dec  9 10:48:49 compute-0 nova_compute[189493]:    <controller type="pci" model="pcie-root-port"/>
Dec  9 10:48:49 compute-0 nova_compute[189493]:    <controller type="pci" model="pcie-root-port"/>
Dec  9 10:48:49 compute-0 nova_compute[189493]:    <controller type="pci" model="pcie-root-port"/>
Dec  9 10:48:49 compute-0 nova_compute[189493]:    <controller type="pci" model="pcie-root-port"/>
Dec  9 10:48:49 compute-0 nova_compute[189493]:    <controller type="pci" model="pcie-root-port"/>
Dec  9 10:48:49 compute-0 nova_compute[189493]:    <controller type="pci" model="pcie-root-port"/>
Dec  9 10:48:49 compute-0 nova_compute[189493]:    <controller type="pci" model="pcie-root-port"/>
Dec  9 10:48:49 compute-0 nova_compute[189493]:    <controller type="pci" model="pcie-root-port"/>
Dec  9 10:48:49 compute-0 nova_compute[189493]:    <controller type="pci" model="pcie-root-port"/>
Dec  9 10:48:49 compute-0 nova_compute[189493]:    <controller type="pci" model="pcie-root-port"/>
Dec  9 10:48:49 compute-0 nova_compute[189493]:    <controller type="pci" model="pcie-root-port"/>
Dec  9 10:48:49 compute-0 nova_compute[189493]:    <controller type="pci" model="pcie-root-port"/>
Dec  9 10:48:49 compute-0 nova_compute[189493]:    <controller type="pci" model="pcie-root-port"/>
Dec  9 10:48:49 compute-0 nova_compute[189493]:    <controller type="pci" model="pcie-root-port"/>
Dec  9 10:48:49 compute-0 nova_compute[189493]:    <controller type="pci" model="pcie-root-port"/>
Dec  9 10:48:49 compute-0 nova_compute[189493]:    <controller type="usb" index="0"/>
Dec  9 10:48:49 compute-0 nova_compute[189493]:    <memballoon model="virtio">
Dec  9 10:48:49 compute-0 nova_compute[189493]:      <stats period="10"/>
Dec  9 10:48:49 compute-0 nova_compute[189493]:    </memballoon>
Dec  9 10:48:49 compute-0 nova_compute[189493]:  </devices>
Dec  9 10:48:49 compute-0 nova_compute[189493]: </domain>
Dec  9 10:48:49 compute-0 nova_compute[189493]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
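
The dumped domain is a q35 machine with a host-model CPU, memory in KiB (524288 KiB = 512 MiB, matching the flavor), two virtio qcow2 disks plus the sata config-drive cdrom, and an interface of type "ethernet": nova only defines the tap device here, because os-vif plugs it into br-int in a separate step, logged below. A quick sanity parse of such a dump, assuming it has been saved to guest.xml:

    import xml.etree.ElementTree as ET

    tree = ET.parse("guest.xml")
    root = tree.getroot()
    # libvirt's <memory> defaults to KiB when no unit attribute is given.
    print(root.findtext("name"), root.findtext("memory"), "KiB")
    for disk in root.iter("disk"):
        src = disk.find("source")
        tgt = disk.find("target")
        print(disk.get("device"), tgt.get("dev"), "->", src.get("file"))
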
Dec  9 10:48:49 compute-0 nova_compute[189493]: 2025-12-09 10:48:49.921 189497 DEBUG nova.compute.manager [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f] Preparing to wait for external event network-vif-plugged-2c684388-b6d9-4de0-8691-29807fabed2c prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec  9 10:48:49 compute-0 nova_compute[189493]: 2025-12-09 10:48:49.922 189497 DEBUG oslo_concurrency.lockutils [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Acquiring lock "41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  9 10:48:49 compute-0 nova_compute[189493]: 2025-12-09 10:48:49.922 189497 DEBUG oslo_concurrency.lockutils [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lock "41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  9 10:48:49 compute-0 nova_compute[189493]: 2025-12-09 10:48:49.923 189497 DEBUG oslo_concurrency.lockutils [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lock "41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
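
Note the ordering: the waiter for network-vif-plugged is registered before the VIF is actually plugged, so neutron's callback cannot arrive in the gap and be lost. A toy version of the pattern with threading primitives (nova's real implementation is eventlet-based):

    import threading

    events = {}
    lock = threading.Lock()

    def prepare(name):
        with lock:                        # mirrors the "-events" lock above
            return events.setdefault(name, threading.Event())

    def deliver(name):
        with lock:
            ev = events.setdefault(name, threading.Event())
        ev.set()

    waiter = prepare("network-vif-plugged-2c684388")
    deliver("network-vif-plugged-2c684388")   # e.g. the neutron callback fires
    print(waiter.wait(timeout=300))           # True: the event was not lost
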
Dec  9 10:48:49 compute-0 nova_compute[189493]: 2025-12-09 10:48:49.924 189497 DEBUG nova.virt.libvirt.vif [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-09T10:48:38Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='test_0',display_name='test_0',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='test-0',id=1,image_ref='53d12211-5d5c-4333-b3ee-e3dcf1663767',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='736bbfddbeea47e3ac9d863ba120b8f2',ramdisk_id='',reservation_id='r-o83aar8e',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,admin,member',image_base_image_ref='53d12211-5d5c-4333-b3ee-e3dcf1663767',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-09T10:48:41Z,user_data=None,user_id='e6d3a937c2a74eb0816d9f63820935e0',uuid=41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2c684388-b6d9-4de0-8691-29807fabed2c", "address": "fa:16:3e:c7:65:39", "network": {"id": "c5af7354-5afe-400a-9e13-5500648117d8", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.250", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "736bbfddbeea47e3ac9d863ba120b8f2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2c684388-b6", "ovs_interfaceid": "2c684388-b6d9-4de0-8691-29807fabed2c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  9 10:48:49 compute-0 nova_compute[189493]: 2025-12-09 10:48:49.925 189497 DEBUG nova.network.os_vif_util [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Converting VIF {"id": "2c684388-b6d9-4de0-8691-29807fabed2c", "address": "fa:16:3e:c7:65:39", "network": {"id": "c5af7354-5afe-400a-9e13-5500648117d8", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.250", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "736bbfddbeea47e3ac9d863ba120b8f2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2c684388-b6", "ovs_interfaceid": "2c684388-b6d9-4de0-8691-29807fabed2c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  9 10:48:49 compute-0 nova_compute[189493]: 2025-12-09 10:48:49.926 189497 DEBUG nova.network.os_vif_util [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:c7:65:39,bridge_name='br-int',has_traffic_filtering=True,id=2c684388-b6d9-4de0-8691-29807fabed2c,network=Network(c5af7354-5afe-400a-9e13-5500648117d8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2c684388-b6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  9 10:48:49 compute-0 nova_compute[189493]: 2025-12-09 10:48:49.927 189497 DEBUG os_vif [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:c7:65:39,bridge_name='br-int',has_traffic_filtering=True,id=2c684388-b6d9-4de0-8691-29807fabed2c,network=Network(c5af7354-5afe-400a-9e13-5500648117d8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2c684388-b6') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  9 10:48:49 compute-0 nova_compute[189493]: 2025-12-09 10:48:49.996 189497 DEBUG ovsdbapp.backend.ovs_idl [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Dec  9 10:48:49 compute-0 nova_compute[189493]: 2025-12-09 10:48:49.997 189497 DEBUG ovsdbapp.backend.ovs_idl [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Dec  9 10:48:49 compute-0 nova_compute[189493]: 2025-12-09 10:48:49.997 189497 DEBUG ovsdbapp.backend.ovs_idl [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106#033[00m
Dec  9 10:48:49 compute-0 nova_compute[189493]: 2025-12-09 10:48:49.997 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] tcp:127.0.0.1:6640: entering CONNECTING _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Dec  9 10:48:49 compute-0 nova_compute[189493]: 2025-12-09 10:48:49.998 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [POLLOUT] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 10:48:49 compute-0 nova_compute[189493]: 2025-12-09 10:48:49.999 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519#033[00m
Dec  9 10:48:50 compute-0 nova_compute[189493]: 2025-12-09 10:48:50.000 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 10:48:50 compute-0 nova_compute[189493]: 2025-12-09 10:48:50.002 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 10:48:50 compute-0 nova_compute[189493]: 2025-12-09 10:48:50.005 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 10:48:50 compute-0 nova_compute[189493]: 2025-12-09 10:48:50.018 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 10:48:50 compute-0 nova_compute[189493]: 2025-12-09 10:48:50.018 189497 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  9 10:48:50 compute-0 nova_compute[189493]: 2025-12-09 10:48:50.018 189497 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  9 10:48:50 compute-0 nova_compute[189493]: 2025-12-09 10:48:50.019 189497 INFO oslo.privsep.daemon [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'vif_plug_ovs.privsep.vif_plug', '--privsep_sock_path', '/tmp/tmpq6wvstdd/privsep.sock']#033[00m
Dec  9 10:48:50 compute-0 nova_compute[189493]: 2025-12-09 10:48:50.828 189497 INFO oslo.privsep.daemon [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Spawned new privsep daemon via rootwrap#033[00m
Dec  9 10:48:50 compute-0 nova_compute[189493]: 2025-12-09 10:48:50.664 239844 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Dec  9 10:48:50 compute-0 nova_compute[189493]: 2025-12-09 10:48:50.675 239844 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Dec  9 10:48:50 compute-0 nova_compute[189493]: 2025-12-09 10:48:50.679 239844 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_NET_ADMIN/CAP_DAC_OVERRIDE|CAP_NET_ADMIN/none#033[00m
Dec  9 10:48:50 compute-0 nova_compute[189493]: 2025-12-09 10:48:50.680 239844 INFO oslo.privsep.daemon [-] privsep daemon running as pid 239844#033[00m
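
Plugging the VIF needs root, so oslo.privsep forks a helper through nova-rootwrap that keeps only CAP_DAC_OVERRIDE and CAP_NET_ADMIN and serves requests over the unix socket in /tmp. (The child's 10:48:50.664 lines appear after the parent's 10:48:50.828 line only because they are forwarded through the parent's logger.) The logged capability set can be cross-checked against /proc; a small illustrative reader, pid taken from the log:

    def read_caps(pid):
        # CapEff/CapPrm in /proc/<pid>/status are hex bitmasks; decode them
        # with "capsh --decode=<mask>" and compare against the values logged.
        with open(f"/proc/{pid}/status") as f:
            return {k: v.strip() for k, v in
                    (line.split(":", 1) for line in f) if k.startswith("Cap")}

    print(read_caps(239844))
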
Dec  9 10:48:51 compute-0 nova_compute[189493]: 2025-12-09 10:48:51.173 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 10:48:51 compute-0 nova_compute[189493]: 2025-12-09 10:48:51.175 189497 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2c684388-b6, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  9 10:48:51 compute-0 nova_compute[189493]: 2025-12-09 10:48:51.177 189497 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap2c684388-b6, col_values=(('external_ids', {'iface-id': '2c684388-b6d9-4de0-8691-29807fabed2c', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:c7:65:39', 'vm-uuid': '41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  9 10:48:51 compute-0 nova_compute[189493]: 2025-12-09 10:48:51.181 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
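
The two-command transaction is the plug itself: add the port to br-int, then stamp the Interface with external_ids, of which iface-id is the key ovn-controller matches against the logical port name. Roughly the equivalent single ovs-vsctl invocation, sketched with subprocess (assumes ovs-vsctl on PATH and suitable privileges):

    import subprocess

    port, bridge = "tap2c684388-b6", "br-int"
    subprocess.run([
        "ovs-vsctl", "--may-exist", "add-port", bridge, port, "--",
        "set", "Interface", port,
        "external_ids:iface-id=2c684388-b6d9-4de0-8691-29807fabed2c",
        "external_ids:iface-status=active",
        "external_ids:attached-mac=fa:16:3e:c7:65:39",
        "external_ids:vm-uuid=41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f",
    ], check=True)
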
Dec  9 10:48:51 compute-0 NetworkManager[56302]: <info>  [1765277331.1830] manager: (tap2c684388-b6): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/19)
Dec  9 10:48:51 compute-0 nova_compute[189493]: 2025-12-09 10:48:51.185 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  9 10:48:51 compute-0 nova_compute[189493]: 2025-12-09 10:48:51.194 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 10:48:51 compute-0 nova_compute[189493]: 2025-12-09 10:48:51.196 189497 INFO os_vif [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:c7:65:39,bridge_name='br-int',has_traffic_filtering=True,id=2c684388-b6d9-4de0-8691-29807fabed2c,network=Network(c5af7354-5afe-400a-9e13-5500648117d8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2c684388-b6')#033[00m
Dec  9 10:48:51 compute-0 nova_compute[189493]: 2025-12-09 10:48:51.289 189497 DEBUG nova.virt.libvirt.driver [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  9 10:48:51 compute-0 nova_compute[189493]: 2025-12-09 10:48:51.290 189497 DEBUG nova.virt.libvirt.driver [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  9 10:48:51 compute-0 nova_compute[189493]: 2025-12-09 10:48:51.290 189497 DEBUG nova.virt.libvirt.driver [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  9 10:48:51 compute-0 nova_compute[189493]: 2025-12-09 10:48:51.291 189497 DEBUG nova.virt.libvirt.driver [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] No VIF found with MAC fa:16:3e:c7:65:39, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  9 10:48:51 compute-0 nova_compute[189493]: 2025-12-09 10:48:51.291 189497 INFO nova.virt.libvirt.driver [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f] Using config drive#033[00m
Dec  9 10:48:51 compute-0 nova_compute[189493]: 2025-12-09 10:48:51.435 189497 DEBUG nova.network.neutron [req-03540681-d93c-4b28-962d-20543abce1fc req-f5b1598e-12af-4775-baf4-aa4e7ed049f4 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] [instance: 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f] Updated VIF entry in instance network info cache for port 2c684388-b6d9-4de0-8691-29807fabed2c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  9 10:48:51 compute-0 nova_compute[189493]: 2025-12-09 10:48:51.436 189497 DEBUG nova.network.neutron [req-03540681-d93c-4b28-962d-20543abce1fc req-f5b1598e-12af-4775-baf4-aa4e7ed049f4 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] [instance: 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f] Updating instance_info_cache with network_info: [{"id": "2c684388-b6d9-4de0-8691-29807fabed2c", "address": "fa:16:3e:c7:65:39", "network": {"id": "c5af7354-5afe-400a-9e13-5500648117d8", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.250", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "736bbfddbeea47e3ac9d863ba120b8f2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2c684388-b6", "ovs_interfaceid": "2c684388-b6d9-4de0-8691-29807fabed2c", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  9 10:48:51 compute-0 nova_compute[189493]: 2025-12-09 10:48:51.456 189497 DEBUG oslo_concurrency.lockutils [req-03540681-d93c-4b28-962d-20543abce1fc req-f5b1598e-12af-4775-baf4-aa4e7ed049f4 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] Releasing lock "refresh_cache-41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  9 10:48:51 compute-0 nova_compute[189493]: 2025-12-09 10:48:51.896 189497 INFO nova.virt.libvirt.driver [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f] Creating config drive at /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.config#033[00m
Dec  9 10:48:51 compute-0 nova_compute[189493]: 2025-12-09 10:48:51.900 189497 DEBUG oslo_concurrency.processutils [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpp2sdlf6h execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  9 10:48:51 compute-0 podman[239850]: 2025-12-09 10:48:51.972139236 +0000 UTC m=+0.123605205 container health_status 0391d8911d61abd7376f1f93f329cadfe8d3add845c9e6f46fc2c3dfbcc4f02a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec  9 10:48:51 compute-0 nova_compute[189493]: 2025-12-09 10:48:51.990 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 10:48:52 compute-0 nova_compute[189493]: 2025-12-09 10:48:52.041 189497 DEBUG oslo_concurrency.processutils [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpp2sdlf6h" returned: 0 in 0.140s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
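
The config drive is built as an ISO9660 image with volume label config-2, the label cloud-init (and cirros) probes for. The publisher string is a single argv element; oslo logs argv joined with spaces, which is why it appears unquoted above. Rebuilding the logged call with subprocess, staging directory name taken from the log (any staged metadata tree works):

    import subprocess

    staging_dir = "/tmp/tmpp2sdlf6h"   # per the log; normally a fresh tempdir
    iso = "/var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.config"
    subprocess.run([
        "/usr/bin/mkisofs", "-o", iso,
        "-ldots", "-allow-lowercase", "-allow-multidot", "-l",
        "-publisher", "OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9",
        "-quiet", "-J", "-r", "-V", "config-2",  # label cloud-init searches for
        staging_dir,
    ], check=True)
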
Dec  9 10:48:52 compute-0 kernel: tun: Universal TUN/TAP device driver, 1.6
Dec  9 10:48:52 compute-0 kernel: tap2c684388-b6: entered promiscuous mode
Dec  9 10:48:52 compute-0 NetworkManager[56302]: <info>  [1765277332.1876] manager: (tap2c684388-b6): new Tun device (/org/freedesktop/NetworkManager/Devices/20)
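
QEMU's first tap open loads the tun module (driver banner above), and the device enters promiscuous mode once attached to the OVS datapath; NetworkManager notices the new Tun device but, per the 'external' managed-type in the state changes below, only tracks it without managing it.
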
Dec  9 10:48:52 compute-0 nova_compute[189493]: 2025-12-09 10:48:52.185 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 10:48:52 compute-0 ovn_controller[97780]: 2025-12-09T10:48:52Z|00027|binding|INFO|Claiming lport 2c684388-b6d9-4de0-8691-29807fabed2c for this chassis.
Dec  9 10:48:52 compute-0 ovn_controller[97780]: 2025-12-09T10:48:52Z|00028|binding|INFO|2c684388-b6d9-4de0-8691-29807fabed2c: Claiming fa:16:3e:c7:65:39 192.168.0.250
Dec  9 10:48:52 compute-0 nova_compute[189493]: 2025-12-09 10:48:52.200 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 10:48:52 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:48:52.209 106644 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c7:65:39 192.168.0.250'], port_security=['fa:16:3e:c7:65:39 192.168.0.250'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '192.168.0.250/24', 'neutron:device_id': '41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c5af7354-5afe-400a-9e13-5500648117d8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '736bbfddbeea47e3ac9d863ba120b8f2', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd86dfae4-cfd5-480d-a50e-0084326b1439', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=61df917c-633f-4b35-857d-39fd859caf35, chassis=[<ovs.db.idl.Row object at 0x7fa01184a610>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fa01184a610>], logical_port=2c684388-b6d9-4de0-8691-29807fabed2c) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  9 10:48:52 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:48:52.212 106644 INFO neutron.agent.ovn.metadata.agent [-] Port 2c684388-b6d9-4de0-8691-29807fabed2c in datapath c5af7354-5afe-400a-9e13-5500648117d8 bound to our chassis#033[00m
Dec  9 10:48:52 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:48:52.215 106644 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c5af7354-5afe-400a-9e13-5500648117d8#033[00m
Dec  9 10:48:52 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:48:52.217 106644 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.default', '--privsep_sock_path', '/tmp/tmp704nukzt/privsep.sock']#033[00m
Dec  9 10:48:52 compute-0 systemd-udevd[239891]: Network interface NamePolicy= disabled on kernel command line.
Dec  9 10:48:52 compute-0 NetworkManager[56302]: <info>  [1765277332.2628] device (tap2c684388-b6): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  9 10:48:52 compute-0 NetworkManager[56302]: <info>  [1765277332.2636] device (tap2c684388-b6): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  9 10:48:52 compute-0 nova_compute[189493]: 2025-12-09 10:48:52.293 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 10:48:52 compute-0 systemd-machined[155790]: New machine qemu-1-instance-00000001.
Dec  9 10:48:52 compute-0 ovn_controller[97780]: 2025-12-09T10:48:52Z|00029|binding|INFO|Setting lport 2c684388-b6d9-4de0-8691-29807fabed2c ovn-installed in OVS
Dec  9 10:48:52 compute-0 ovn_controller[97780]: 2025-12-09T10:48:52Z|00030|binding|INFO|Setting lport 2c684388-b6d9-4de0-8691-29807fabed2c up in Southbound
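
ovn-controller has matched the Interface's external_ids:iface-id to the logical port, claimed it for this chassis, set ovn-installed on the OVS interface, and flipped up=true in the southbound Port_Binding; that southbound update is what ultimately makes neutron emit the network-vif-plugged event received below. The binding can be confirmed from the CLI side, assuming ovn-sbctl can reach the southbound DB:

    import subprocess

    # Query the southbound Port_Binding row for this logical port; chassis
    # should be set and up should be true once the claim has landed.
    print(subprocess.run(
        ["ovn-sbctl", "--columns=chassis,up", "find", "Port_Binding",
         "logical_port=2c684388-b6d9-4de0-8691-29807fabed2c"],
        capture_output=True, text=True, check=True).stdout)
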
Dec  9 10:48:52 compute-0 nova_compute[189493]: 2025-12-09 10:48:52.303 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 10:48:52 compute-0 systemd[1]: Started Virtual Machine qemu-1-instance-00000001.
Dec  9 10:48:52 compute-0 nova_compute[189493]: 2025-12-09 10:48:52.565 189497 DEBUG nova.compute.manager [req-a8ffcb00-2077-4a5b-a3c9-79ee190a0feb req-fb701220-8ad7-4e18-9f71-7c09c7e06739 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] [instance: 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f] Received event network-vif-plugged-2c684388-b6d9-4de0-8691-29807fabed2c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  9 10:48:52 compute-0 nova_compute[189493]: 2025-12-09 10:48:52.566 189497 DEBUG oslo_concurrency.lockutils [req-a8ffcb00-2077-4a5b-a3c9-79ee190a0feb req-fb701220-8ad7-4e18-9f71-7c09c7e06739 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] Acquiring lock "41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  9 10:48:52 compute-0 nova_compute[189493]: 2025-12-09 10:48:52.569 189497 DEBUG oslo_concurrency.lockutils [req-a8ffcb00-2077-4a5b-a3c9-79ee190a0feb req-fb701220-8ad7-4e18-9f71-7c09c7e06739 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] Lock "41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  9 10:48:52 compute-0 nova_compute[189493]: 2025-12-09 10:48:52.572 189497 DEBUG oslo_concurrency.lockutils [req-a8ffcb00-2077-4a5b-a3c9-79ee190a0feb req-fb701220-8ad7-4e18-9f71-7c09c7e06739 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] Lock "41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  9 10:48:52 compute-0 nova_compute[189493]: 2025-12-09 10:48:52.572 189497 DEBUG nova.compute.manager [req-a8ffcb00-2077-4a5b-a3c9-79ee190a0feb req-fb701220-8ad7-4e18-9f71-7c09c7e06739 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] [instance: 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f] Processing event network-vif-plugged-2c684388-b6d9-4de0-8691-29807fabed2c _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec  9 10:48:52 compute-0 nova_compute[189493]: 2025-12-09 10:48:52.669 189497 DEBUG nova.virt.driver [None req-bd919016-4d35-4252-9704-133b2c72d336 - - - - - -] Emitting event <LifecycleEvent: 1765277332.6679087, 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  9 10:48:52 compute-0 nova_compute[189493]: 2025-12-09 10:48:52.670 189497 INFO nova.compute.manager [None req-bd919016-4d35-4252-9704-133b2c72d336 - - - - - -] [instance: 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f] VM Started (Lifecycle Event)#033[00m
Dec  9 10:48:52 compute-0 nova_compute[189493]: 2025-12-09 10:48:52.674 189497 DEBUG nova.compute.manager [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec  9 10:48:52 compute-0 nova_compute[189493]: 2025-12-09 10:48:52.682 189497 DEBUG nova.virt.libvirt.driver [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec  9 10:48:52 compute-0 nova_compute[189493]: 2025-12-09 10:48:52.695 189497 INFO nova.virt.libvirt.driver [-] [instance: 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f] Instance spawned successfully.#033[00m
Dec  9 10:48:52 compute-0 nova_compute[189493]: 2025-12-09 10:48:52.695 189497 DEBUG nova.virt.libvirt.driver [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec  9 10:48:52 compute-0 systemd[1]: Starting libvirt proxy daemon...
Dec  9 10:48:52 compute-0 systemd[1]: Started libvirt proxy daemon.
Dec  9 10:48:52 compute-0 nova_compute[189493]: 2025-12-09 10:48:52.828 189497 DEBUG nova.compute.manager [None req-bd919016-4d35-4252-9704-133b2c72d336 - - - - - -] [instance: 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  9 10:48:52 compute-0 nova_compute[189493]: 2025-12-09 10:48:52.838 189497 DEBUG nova.compute.manager [None req-bd919016-4d35-4252-9704-133b2c72d336 - - - - - -] [instance: 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  9 10:48:52 compute-0 nova_compute[189493]: 2025-12-09 10:48:52.878 189497 INFO nova.compute.manager [None req-bd919016-4d35-4252-9704-133b2c72d336 - - - - - -] [instance: 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  9 10:48:52 compute-0 nova_compute[189493]: 2025-12-09 10:48:52.879 189497 DEBUG nova.virt.driver [None req-bd919016-4d35-4252-9704-133b2c72d336 - - - - - -] Emitting event <LifecycleEvent: 1765277332.668131, 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  9 10:48:52 compute-0 nova_compute[189493]: 2025-12-09 10:48:52.879 189497 INFO nova.compute.manager [None req-bd919016-4d35-4252-9704-133b2c72d336 - - - - - -] [instance: 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f] VM Paused (Lifecycle Event)#033[00m
Dec  9 10:48:52 compute-0 nova_compute[189493]: 2025-12-09 10:48:52.903 189497 DEBUG nova.compute.manager [None req-bd919016-4d35-4252-9704-133b2c72d336 - - - - - -] [instance: 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  9 10:48:52 compute-0 nova_compute[189493]: 2025-12-09 10:48:52.910 189497 DEBUG nova.virt.driver [None req-bd919016-4d35-4252-9704-133b2c72d336 - - - - - -] Emitting event <LifecycleEvent: 1765277332.6776702, 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  9 10:48:52 compute-0 nova_compute[189493]: 2025-12-09 10:48:52.910 189497 INFO nova.compute.manager [None req-bd919016-4d35-4252-9704-133b2c72d336 - - - - - -] [instance: 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f] VM Resumed (Lifecycle Event)#033[00m
Dec  9 10:48:52 compute-0 nova_compute[189493]: 2025-12-09 10:48:52.946 189497 DEBUG nova.compute.manager [None req-bd919016-4d35-4252-9704-133b2c72d336 - - - - - -] [instance: 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  9 10:48:52 compute-0 nova_compute[189493]: 2025-12-09 10:48:52.952 189497 DEBUG nova.virt.libvirt.driver [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  9 10:48:52 compute-0 nova_compute[189493]: 2025-12-09 10:48:52.953 189497 DEBUG nova.virt.libvirt.driver [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  9 10:48:52 compute-0 nova_compute[189493]: 2025-12-09 10:48:52.953 189497 DEBUG nova.virt.libvirt.driver [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  9 10:48:52 compute-0 nova_compute[189493]: 2025-12-09 10:48:52.954 189497 DEBUG nova.virt.libvirt.driver [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  9 10:48:52 compute-0 nova_compute[189493]: 2025-12-09 10:48:52.954 189497 DEBUG nova.virt.libvirt.driver [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  9 10:48:52 compute-0 nova_compute[189493]: 2025-12-09 10:48:52.955 189497 DEBUG nova.virt.libvirt.driver [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
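
The six "Found default" lines above show spawn-time image-property defaults being pinned to the instance so the guest keeps the same device models on later rebuilds. As an illustrative sketch only (not Nova's actual code; the system_metadata target and the image_* key prefix are assumptions made for illustration), the registration amounts to:

    # Defaults exactly as logged above for this instance
    defaults = {
        'hw_cdrom_bus': 'sata',
        'hw_disk_bus': 'virtio',
        'hw_input_bus': 'usb',
        'hw_pointer_model': 'usbtablet',
        'hw_video_model': 'virtio',
        'hw_vif_model': 'virtio',
    }

    def register_undefined_defaults(system_metadata, defaults):
        # Only record a default where the image/flavor set nothing,
        # so explicit choices are never overwritten.
        for prop, value in defaults.items():
            system_metadata.setdefault('image_' + prop, value)
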
Dec  9 10:48:52 compute-0 nova_compute[189493]: 2025-12-09 10:48:52.960 189497 DEBUG nova.compute.manager [None req-bd919016-4d35-4252-9704-133b2c72d336 - - - - - -] [instance: 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  9 10:48:52 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:48:52.981 106644 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap#033[00m
Dec  9 10:48:52 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:48:52.982 106644 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmp704nukzt/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362#033[00m
Dec  9 10:48:52 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:48:52.820 239934 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Dec  9 10:48:52 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:48:52.827 239934 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Dec  9 10:48:52 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:48:52.832 239934 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/none#033[00m
Dec  9 10:48:52 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:48:52.832 239934 INFO oslo.privsep.daemon [-] privsep daemon running as pid 239934#033[00m
Dec  9 10:48:52 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:48:52.986 239934 DEBUG oslo.privsep.daemon [-] privsep: reply[8e0ba057-9def-41c7-ad8b-ed5b33718807]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  9 10:48:52 compute-0 nova_compute[189493]: 2025-12-09 10:48:52.993 189497 INFO nova.compute.manager [None req-bd919016-4d35-4252-9704-133b2c72d336 - - - - - -] [instance: 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
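
In the "Synchronizing instance power state" messages above, the DB and VM values are nova's numeric power_state constants: 0 is NOSTATE and 1 is RUNNING, i.e. libvirt already reports the guest running while the database still shows the pre-spawn state, which is why the sync is skipped while task_state is spawning. For decoding the log (values per nova.compute.power_state):

    POWER_STATES = {
        0: 'NOSTATE',    # DB value until spawn completes
        1: 'RUNNING',    # what libvirt reports once the guest starts
        3: 'PAUSED',
        4: 'SHUTDOWN',
        6: 'CRASHED',
        7: 'SUSPENDED',
    }
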
Dec  9 10:48:53 compute-0 nova_compute[189493]: 2025-12-09 10:48:53.023 189497 INFO nova.compute.manager [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f] Took 11.44 seconds to spawn the instance on the hypervisor.#033[00m
Dec  9 10:48:53 compute-0 nova_compute[189493]: 2025-12-09 10:48:53.026 189497 DEBUG nova.compute.manager [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  9 10:48:53 compute-0 nova_compute[189493]: 2025-12-09 10:48:53.148 189497 INFO nova.compute.manager [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f] Took 11.99 seconds to build instance.#033[00m
Dec  9 10:48:53 compute-0 nova_compute[189493]: 2025-12-09 10:48:53.197 189497 DEBUG oslo_concurrency.lockutils [None req-fc62c094-7fe7-4e57-9a4f-37933da4c2bf e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lock "41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 12.153s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  9 10:48:53 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:48:53.491 239934 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  9 10:48:53 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:48:53.491 239934 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  9 10:48:53 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:48:53.491 239934 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  9 10:48:54 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:48:54.046 239934 DEBUG oslo.privsep.daemon [-] privsep: reply[6aa791f0-3fa1-447b-8c19-64a5934562a8]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  9 10:48:54 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:48:54.048 106644 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapc5af7354-51 in ovnmeta-c5af7354-5afe-400a-9e13-5500648117d8 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665#033[00m
Dec  9 10:48:54 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:48:54.050 239934 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapc5af7354-50 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204#033[00m
Dec  9 10:48:54 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:48:54.050 239934 DEBUG oslo.privsep.daemon [-] privsep: reply[bb0127dd-bc74-4e0d-9be7-14f3b039a7ed]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  9 10:48:54 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:48:54.053 239934 DEBUG oslo.privsep.daemon [-] privsep: reply[44b95edd-e192-407b-b4f2-eff4c8961521]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  9 10:48:54 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:48:54.089 106757 DEBUG oslo.privsep.daemon [-] privsep: reply[b8f681de-83c4-4ce4-b739-26b63f364715]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  9 10:48:54 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:48:54.115 239934 DEBUG oslo.privsep.daemon [-] privsep: reply[a73158ad-fe9b-44c0-96d1-8c957f8b4a79]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  9 10:48:54 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:48:54.119 106644 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.link_cmd', '--privsep_sock_path', '/tmp/tmp89wfnesc/privsep.sock']#033[00m
Dec  9 10:48:54 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:48:54.813 106644 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap#033[00m
Dec  9 10:48:54 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:48:54.814 106644 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmp89wfnesc/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362#033[00m
Dec  9 10:48:54 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:48:54.671 239949 INFO oslo.privsep.daemon [-] privsep daemon starting#033[00m
Dec  9 10:48:54 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:48:54.679 239949 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0#033[00m
Dec  9 10:48:54 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:48:54.684 239949 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_NET_ADMIN|CAP_SYS_ADMIN/none#033[00m
Dec  9 10:48:54 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:48:54.684 239949 INFO oslo.privsep.daemon [-] privsep daemon running as pid 239949#033[00m
Dec  9 10:48:54 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:48:54.817 239949 DEBUG oslo.privsep.daemon [-] privsep: reply[ec24217d-6a5d-404e-a8f2-1ee21776bd5a]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
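
The provision_datapath step logged at 10:48:54.048 creates a veth pair with the tapc5af7354-51 end inside the ovnmeta- namespace (the -50 end stays on the host and is plugged into br-int below). A minimal sketch of that plumbing, assuming pyroute2 (the library behind neutron's privileged ip_lib calls) and omitting the privsep indirection and error handling:

    from pyroute2 import IPRoute, netns

    NS = 'ovnmeta-c5af7354-5afe-400a-9e13-5500648117d8'
    netns.create(NS)  # the real agent reuses the namespace if it already exists

    ipr = IPRoute()
    # veth pair: tap...-50 stays on the host, tap...-51 moves into the namespace
    ipr.link('add', ifname='tapc5af7354-50', kind='veth', peer='tapc5af7354-51')
    peer = ipr.link_lookup(ifname='tapc5af7354-51')[0]
    ipr.link('set', index=peer, net_ns_fd=NS)
    host = ipr.link_lookup(ifname='tapc5af7354-50')[0]
    ipr.link('set', index=host, state='up')
    ipr.close()
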
Dec  9 10:48:54 compute-0 nova_compute[189493]: 2025-12-09 10:48:54.825 189497 DEBUG nova.compute.manager [req-bd69f214-75ff-4354-b4a5-c1802e8485a7 req-f205e711-e80c-4902-96d8-8c1e43a8fbeb 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] [instance: 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f] Received event network-vif-plugged-2c684388-b6d9-4de0-8691-29807fabed2c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  9 10:48:54 compute-0 nova_compute[189493]: 2025-12-09 10:48:54.825 189497 DEBUG oslo_concurrency.lockutils [req-bd69f214-75ff-4354-b4a5-c1802e8485a7 req-f205e711-e80c-4902-96d8-8c1e43a8fbeb 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] Acquiring lock "41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  9 10:48:54 compute-0 nova_compute[189493]: 2025-12-09 10:48:54.826 189497 DEBUG oslo_concurrency.lockutils [req-bd69f214-75ff-4354-b4a5-c1802e8485a7 req-f205e711-e80c-4902-96d8-8c1e43a8fbeb 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] Lock "41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  9 10:48:54 compute-0 nova_compute[189493]: 2025-12-09 10:48:54.826 189497 DEBUG oslo_concurrency.lockutils [req-bd69f214-75ff-4354-b4a5-c1802e8485a7 req-f205e711-e80c-4902-96d8-8c1e43a8fbeb 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] Lock "41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  9 10:48:54 compute-0 nova_compute[189493]: 2025-12-09 10:48:54.826 189497 DEBUG nova.compute.manager [req-bd69f214-75ff-4354-b4a5-c1802e8485a7 req-f205e711-e80c-4902-96d8-8c1e43a8fbeb 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] [instance: 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f] No waiting events found dispatching network-vif-plugged-2c684388-b6d9-4de0-8691-29807fabed2c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  9 10:48:54 compute-0 nova_compute[189493]: 2025-12-09 10:48:54.827 189497 WARNING nova.compute.manager [req-bd69f214-75ff-4354-b4a5-c1802e8485a7 req-f205e711-e80c-4902-96d8-8c1e43a8fbeb 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] [instance: 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f] Received unexpected event network-vif-plugged-2c684388-b6d9-4de0-8691-29807fabed2c for instance with vm_state active and task_state None.#033[00m
Dec  9 10:48:55 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:48:55.317 239949 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  9 10:48:55 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:48:55.317 239949 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  9 10:48:55 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:48:55.317 239949 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  9 10:48:55 compute-0 podman[239954]: 2025-12-09 10:48:55.952745743 +0000 UTC m=+0.104571755 container health_status 8508a94dacd5acdb5dbf860f4282331529be5c86ebd3e90b10e1dde8bc5013e9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec  9 10:48:55 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:48:55.954 239949 DEBUG oslo.privsep.daemon [-] privsep: reply[17701ff2-f5e5-4622-a71b-abcd6ef8131a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  9 10:48:55 compute-0 NetworkManager[56302]: <info>  [1765277335.9931] manager: (tapc5af7354-50): new Veth device (/org/freedesktop/NetworkManager/Devices/21)
Dec  9 10:48:55 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:48:55.991 239934 DEBUG oslo.privsep.daemon [-] privsep: reply[432ae212-fe6d-4065-b8d8-80ac9f4f4fed]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  9 10:48:56 compute-0 systemd-udevd[239983]: Network interface NamePolicy= disabled on kernel command line.
Dec  9 10:48:56 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:48:56.029 239949 DEBUG oslo.privsep.daemon [-] privsep: reply[8feb6f7e-b999-42fc-b6b9-df6572006093]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  9 10:48:56 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:48:56.036 239949 DEBUG oslo.privsep.daemon [-] privsep: reply[f8731ff5-1e4b-4f58-bfc8-fed1db4a0f1f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  9 10:48:56 compute-0 NetworkManager[56302]: <info>  [1765277336.0722] device (tapc5af7354-50): carrier: link connected
Dec  9 10:48:56 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:48:56.079 239949 DEBUG oslo.privsep.daemon [-] privsep: reply[99314e4f-7bca-4745-8ed7-37e2ca1d84b0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  9 10:48:56 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:48:56.106 239934 DEBUG oslo.privsep.daemon [-] privsep: reply[cc6c3bb9-1b1c-4507-9967-f63d7fa797be]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc5af7354-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:bf:0d:a0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 12], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 396027, 'reachable_time': 28193, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 240001, 'error': None, 'target': 'ovnmeta-c5af7354-5afe-400a-9e13-5500648117d8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  9 10:48:56 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:48:56.130 239934 DEBUG oslo.privsep.daemon [-] privsep: reply[5a9372a0-fd19-4eda-815f-9684ee6826d2]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:febf:da0'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 396027, 'tstamp': 396027}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 240002, 'error': None, 'target': 'ovnmeta-c5af7354-5afe-400a-9e13-5500648117d8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  9 10:48:56 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:48:56.151 239934 DEBUG oslo.privsep.daemon [-] privsep: reply[24b72489-153a-4665-8930-8707d83d15e7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc5af7354-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:bf:0d:a0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 180, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 12], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 396027, 'reachable_time': 28193, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 152, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 152, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 240003, 'error': None, 'target': 'ovnmeta-c5af7354-5afe-400a-9e13-5500648117d8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
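
The two RTM_NEWLINK dumps above are netlink messages rendered as Python dicts, with attributes as [name, value] pairs under 'attrs'. A small helper, grounded in exactly the structure logged, to pull a named IFLA_* attribute out of such a message:

    def get_attr(msg, name):
        # netlink attributes arrive as [name, value] pairs under 'attrs'
        for key, value in msg.get('attrs', []):
            if key == name:
                return value
        return None

    # For the messages logged above:
    #   get_attr(msg, 'IFLA_IFNAME')    -> 'tapc5af7354-51'
    #   get_attr(msg, 'IFLA_ADDRESS')   -> 'fa:16:3e:bf:0d:a0'
    #   get_attr(msg, 'IFLA_OPERSTATE') -> 'UP'
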
Dec  9 10:48:56 compute-0 nova_compute[189493]: 2025-12-09 10:48:56.184 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 10:48:56 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:48:56.192 239934 DEBUG oslo.privsep.daemon [-] privsep: reply[401bfbeb-9d31-45f2-9d67-93de0af399bc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  9 10:48:56 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:48:56.267 239934 DEBUG oslo.privsep.daemon [-] privsep: reply[cc69b384-89cb-4846-b519-92b31c8fac7c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  9 10:48:56 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:48:56.270 106644 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc5af7354-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  9 10:48:56 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:48:56.272 106644 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  9 10:48:56 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:48:56.273 106644 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc5af7354-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  9 10:48:56 compute-0 nova_compute[189493]: 2025-12-09 10:48:56.276 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 10:48:56 compute-0 NetworkManager[56302]: <info>  [1765277336.2773] manager: (tapc5af7354-50): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/22)
Dec  9 10:48:56 compute-0 kernel: tapc5af7354-50: entered promiscuous mode
Dec  9 10:48:56 compute-0 nova_compute[189493]: 2025-12-09 10:48:56.283 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 10:48:56 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:48:56.286 106644 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc5af7354-50, col_values=(('external_ids', {'iface-id': '3eb47070-bc26-4827-a5a8-68152f05129c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
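
The three ovsdbapp commands above (DelPortCommand on br-ex, AddPortCommand on br-int, DbSetCommand on the Interface row) correspond to client code along these lines. A sketch assuming ovsdbapp's Open_vSwitch schema helper and the default local socket path; the real agent reuses a long-lived connection:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server('unix:/run/openvswitch/db.sock',
                                          'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    with api.transaction(check_error=True) as txn:
        txn.add(api.del_port('tapc5af7354-50', bridge='br-ex', if_exists=True))
        txn.add(api.add_port('br-int', 'tapc5af7354-50', may_exist=True))
        txn.add(api.db_set('Interface', 'tapc5af7354-50',
                           ('external_ids',
                            {'iface-id': '3eb47070-bc26-4827-a5a8-68152f05129c'})))
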
Dec  9 10:48:56 compute-0 nova_compute[189493]: 2025-12-09 10:48:56.289 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 10:48:56 compute-0 ovn_controller[97780]: 2025-12-09T10:48:56Z|00031|binding|INFO|Releasing lport 3eb47070-bc26-4827-a5a8-68152f05129c from this chassis (sb_readonly=0)
Dec  9 10:48:56 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:48:56.294 106644 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/c5af7354-5afe-400a-9e13-5500648117d8.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/c5af7354-5afe-400a-9e13-5500648117d8.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252#033[00m
Dec  9 10:48:56 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:48:56.296 239934 DEBUG oslo.privsep.daemon [-] privsep: reply[6650579f-f3fe-4a26-bfc0-6ecd4a591320]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  9 10:48:56 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:48:56.297 106644 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Dec  9 10:48:56 compute-0 ovn_metadata_agent[106639]: global
Dec  9 10:48:56 compute-0 ovn_metadata_agent[106639]:    log         /dev/log local0 debug
Dec  9 10:48:56 compute-0 ovn_metadata_agent[106639]:    log-tag     haproxy-metadata-proxy-c5af7354-5afe-400a-9e13-5500648117d8
Dec  9 10:48:56 compute-0 ovn_metadata_agent[106639]:    user        root
Dec  9 10:48:56 compute-0 ovn_metadata_agent[106639]:    group       root
Dec  9 10:48:56 compute-0 ovn_metadata_agent[106639]:    maxconn     1024
Dec  9 10:48:56 compute-0 ovn_metadata_agent[106639]:    pidfile     /var/lib/neutron/external/pids/c5af7354-5afe-400a-9e13-5500648117d8.pid.haproxy
Dec  9 10:48:56 compute-0 ovn_metadata_agent[106639]:    daemon
Dec  9 10:48:56 compute-0 ovn_metadata_agent[106639]: 
Dec  9 10:48:56 compute-0 ovn_metadata_agent[106639]: defaults
Dec  9 10:48:56 compute-0 ovn_metadata_agent[106639]:    log global
Dec  9 10:48:56 compute-0 ovn_metadata_agent[106639]:    mode http
Dec  9 10:48:56 compute-0 ovn_metadata_agent[106639]:    option httplog
Dec  9 10:48:56 compute-0 ovn_metadata_agent[106639]:    option dontlognull
Dec  9 10:48:56 compute-0 ovn_metadata_agent[106639]:    option http-server-close
Dec  9 10:48:56 compute-0 ovn_metadata_agent[106639]:    option forwardfor
Dec  9 10:48:56 compute-0 ovn_metadata_agent[106639]:    retries                 3
Dec  9 10:48:56 compute-0 ovn_metadata_agent[106639]:    timeout http-request    30s
Dec  9 10:48:56 compute-0 ovn_metadata_agent[106639]:    timeout connect         30s
Dec  9 10:48:56 compute-0 ovn_metadata_agent[106639]:    timeout client          32s
Dec  9 10:48:56 compute-0 ovn_metadata_agent[106639]:    timeout server          32s
Dec  9 10:48:56 compute-0 ovn_metadata_agent[106639]:    timeout http-keep-alive 30s
Dec  9 10:48:56 compute-0 ovn_metadata_agent[106639]: 
Dec  9 10:48:56 compute-0 ovn_metadata_agent[106639]: 
Dec  9 10:48:56 compute-0 ovn_metadata_agent[106639]: listen listener
Dec  9 10:48:56 compute-0 ovn_metadata_agent[106639]:    bind 169.254.169.254:80
Dec  9 10:48:56 compute-0 ovn_metadata_agent[106639]:    server metadata /var/lib/neutron/metadata_proxy
Dec  9 10:48:56 compute-0 ovn_metadata_agent[106639]:    http-request add-header X-OVN-Network-ID c5af7354-5afe-400a-9e13-5500648117d8
Dec  9 10:48:56 compute-0 ovn_metadata_agent[106639]: create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107#033[00m
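
The agent launches haproxy with this rendered file on the next line. A config like the one above can also be syntax-checked standalone with haproxy's check-only mode; illustrative sketch using the path from the log:

    import subprocess

    conf = ('/var/lib/neutron/ovn-metadata-proxy/'
            'c5af7354-5afe-400a-9e13-5500648117d8.conf')
    # haproxy -c parses the configuration and exits non-zero on errors
    subprocess.run(['haproxy', '-c', '-f', conf], check=True)
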
Dec  9 10:48:56 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:48:56.301 106644 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-c5af7354-5afe-400a-9e13-5500648117d8', 'env', 'PROCESS_TAG=haproxy-c5af7354-5afe-400a-9e13-5500648117d8', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/c5af7354-5afe-400a-9e13-5500648117d8.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84#033[00m
Dec  9 10:48:56 compute-0 nova_compute[189493]: 2025-12-09 10:48:56.321 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 10:48:56 compute-0 podman[240035]: 2025-12-09 10:48:56.855150642 +0000 UTC m=+0.095904598 container create c6a5b789a411de92d3d1addce50ffbf14ba551d1a46a6adcd83e6bfbca83d157 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c5af7354-5afe-400a-9e13-5500648117d8, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, io.buildah.version=1.41.3)
Dec  9 10:48:56 compute-0 podman[240035]: 2025-12-09 10:48:56.805335376 +0000 UTC m=+0.046089382 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Dec  9 10:48:56 compute-0 systemd[1]: Started libpod-conmon-c6a5b789a411de92d3d1addce50ffbf14ba551d1a46a6adcd83e6bfbca83d157.scope.
Dec  9 10:48:56 compute-0 systemd[1]: Started libcrun container.
Dec  9 10:48:56 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2b24b7ae1cc8b90219deedb86d3b48361a8607a5826e7fa3b48e4b1d97a56504/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Dec  9 10:48:56 compute-0 podman[240035]: 2025-12-09 10:48:56.983944404 +0000 UTC m=+0.224698350 container init c6a5b789a411de92d3d1addce50ffbf14ba551d1a46a6adcd83e6bfbca83d157 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c5af7354-5afe-400a-9e13-5500648117d8, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Dec  9 10:48:56 compute-0 nova_compute[189493]: 2025-12-09 10:48:56.992 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 10:48:56 compute-0 podman[240035]: 2025-12-09 10:48:56.998319543 +0000 UTC m=+0.239073469 container start c6a5b789a411de92d3d1addce50ffbf14ba551d1a46a6adcd83e6bfbca83d157 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c5af7354-5afe-400a-9e13-5500648117d8, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3)
Dec  9 10:48:57 compute-0 neutron-haproxy-ovnmeta-c5af7354-5afe-400a-9e13-5500648117d8[240047]: [NOTICE]   (240052) : New worker (240054) forked
Dec  9 10:48:57 compute-0 neutron-haproxy-ovnmeta-c5af7354-5afe-400a-9e13-5500648117d8[240047]: [NOTICE]   (240052) : Loading success.
Dec  9 10:48:59 compute-0 podman[203687]: time="2025-12-09T10:48:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  9 10:48:59 compute-0 podman[203687]: @ - - [09/Dec/2025:10:48:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Dec  9 10:48:59 compute-0 podman[203687]: @ - - [09/Dec/2025:10:48:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4752 "" "Go-http-client/1.1"
Dec  9 10:48:59 compute-0 podman[240063]: 2025-12-09 10:48:59.958743781 +0000 UTC m=+0.103384450 container health_status ceb1c84a2b093143b9383b7e11364d7e851348d724743a0cd9ce4fd0c7070c92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3)
Dec  9 10:49:01 compute-0 nova_compute[189493]: 2025-12-09 10:49:01.190 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 10:49:01 compute-0 openstack_network_exporter[205823]: ERROR   10:49:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  9 10:49:01 compute-0 openstack_network_exporter[205823]: ERROR   10:49:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  9 10:49:01 compute-0 openstack_network_exporter[205823]: ERROR   10:49:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  9 10:49:01 compute-0 openstack_network_exporter[205823]: ERROR   10:49:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  9 10:49:01 compute-0 openstack_network_exporter[205823]: 
Dec  9 10:49:01 compute-0 openstack_network_exporter[205823]: ERROR   10:49:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  9 10:49:01 compute-0 openstack_network_exporter[205823]: 
Dec  9 10:49:01 compute-0 nova_compute[189493]: 2025-12-09 10:49:01.995 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 10:49:02 compute-0 podman[240083]: 2025-12-09 10:49:02.184561071 +0000 UTC m=+0.151970172 container health_status 8ad198c17f1da12dc50d5e17562d0139fb2a2f84db056ee9551dbf4f34c4cb9d (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, container_name=kepler, vendor=Red Hat, Inc., release=1214.1726694543, io.openshift.expose-services=, managed_by=edpm_ansible, name=ubi9, release-0.7.12=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4, config_id=edpm, vcs-type=git, io.openshift.tags=base rhel9, architecture=x86_64, maintainer=Red Hat, Inc., build-date=2024-09-18T21:23:30, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543)
Dec  9 10:49:04 compute-0 podman[240105]: 2025-12-09 10:49:04.942989776 +0000 UTC m=+0.087387516 container health_status b432835229990b9e7cd237d75f8273b15e565fca524d4ea9a7c1f1bf3c773614 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4)
Dec  9 10:49:04 compute-0 podman[240104]: 2025-12-09 10:49:04.959746862 +0000 UTC m=+0.110562805 container health_status 8f562587c42532f877bd4ac5090cf2d81dd9415b6201e22f74972e6d6b9e9403 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Dec  9 10:49:06 compute-0 nova_compute[189493]: 2025-12-09 10:49:06.193 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 10:49:06 compute-0 nova_compute[189493]: 2025-12-09 10:49:06.996 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 10:49:07 compute-0 NetworkManager[56302]: <info>  [1765277347.7265] manager: (patch-br-int-to-provnet-9be5cd6f-7eb0-4077-8aaa-b6b8be023b73): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/23)
Dec  9 10:49:07 compute-0 nova_compute[189493]: 2025-12-09 10:49:07.726 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 10:49:07 compute-0 NetworkManager[56302]: <info>  [1765277347.7273] device (patch-br-int-to-provnet-9be5cd6f-7eb0-4077-8aaa-b6b8be023b73)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  9 10:49:07 compute-0 NetworkManager[56302]: <warn>  [1765277347.7275] device (patch-br-int-to-provnet-9be5cd6f-7eb0-4077-8aaa-b6b8be023b73)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Dec  9 10:49:07 compute-0 NetworkManager[56302]: <info>  [1765277347.7282] manager: (patch-provnet-9be5cd6f-7eb0-4077-8aaa-b6b8be023b73-to-br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/24)
Dec  9 10:49:07 compute-0 NetworkManager[56302]: <info>  [1765277347.7285] device (patch-provnet-9be5cd6f-7eb0-4077-8aaa-b6b8be023b73-to-br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Dec  9 10:49:07 compute-0 NetworkManager[56302]: <warn>  [1765277347.7286] device (patch-provnet-9be5cd6f-7eb0-4077-8aaa-b6b8be023b73-to-br-int)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Dec  9 10:49:07 compute-0 NetworkManager[56302]: <info>  [1765277347.7291] manager: (patch-br-int-to-provnet-9be5cd6f-7eb0-4077-8aaa-b6b8be023b73): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/25)
Dec  9 10:49:07 compute-0 NetworkManager[56302]: <info>  [1765277347.7295] manager: (patch-provnet-9be5cd6f-7eb0-4077-8aaa-b6b8be023b73-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/26)
Dec  9 10:49:07 compute-0 NetworkManager[56302]: <info>  [1765277347.7340] device (patch-br-int-to-provnet-9be5cd6f-7eb0-4077-8aaa-b6b8be023b73)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Dec  9 10:49:07 compute-0 NetworkManager[56302]: <info>  [1765277347.7343] device (patch-provnet-9be5cd6f-7eb0-4077-8aaa-b6b8be023b73-to-br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Dec  9 10:49:07 compute-0 ovn_controller[97780]: 2025-12-09T10:49:07Z|00032|binding|INFO|Releasing lport 3eb47070-bc26-4827-a5a8-68152f05129c from this chassis (sb_readonly=0)
Dec  9 10:49:07 compute-0 ovn_controller[97780]: 2025-12-09T10:49:07Z|00033|binding|INFO|Releasing lport 3eb47070-bc26-4827-a5a8-68152f05129c from this chassis (sb_readonly=0)
Dec  9 10:49:07 compute-0 nova_compute[189493]: 2025-12-09 10:49:07.776 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 10:49:07 compute-0 nova_compute[189493]: 2025-12-09 10:49:07.786 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 10:49:08 compute-0 nova_compute[189493]: 2025-12-09 10:49:08.576 189497 DEBUG nova.compute.manager [req-610a9a60-b8a4-42de-a3ed-7dc2808b2037 req-fe3e58d9-02df-47de-bc73-340568df1ff3 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] [instance: 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f] Received event network-changed-2c684388-b6d9-4de0-8691-29807fabed2c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  9 10:49:08 compute-0 nova_compute[189493]: 2025-12-09 10:49:08.577 189497 DEBUG nova.compute.manager [req-610a9a60-b8a4-42de-a3ed-7dc2808b2037 req-fe3e58d9-02df-47de-bc73-340568df1ff3 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] [instance: 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f] Refreshing instance network info cache due to event network-changed-2c684388-b6d9-4de0-8691-29807fabed2c. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  9 10:49:08 compute-0 nova_compute[189493]: 2025-12-09 10:49:08.578 189497 DEBUG oslo_concurrency.lockutils [req-610a9a60-b8a4-42de-a3ed-7dc2808b2037 req-fe3e58d9-02df-47de-bc73-340568df1ff3 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] Acquiring lock "refresh_cache-41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  9 10:49:08 compute-0 nova_compute[189493]: 2025-12-09 10:49:08.579 189497 DEBUG oslo_concurrency.lockutils [req-610a9a60-b8a4-42de-a3ed-7dc2808b2037 req-fe3e58d9-02df-47de-bc73-340568df1ff3 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] Acquired lock "refresh_cache-41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  9 10:49:08 compute-0 nova_compute[189493]: 2025-12-09 10:49:08.580 189497 DEBUG nova.network.neutron [req-610a9a60-b8a4-42de-a3ed-7dc2808b2037 req-fe3e58d9-02df-47de-bc73-340568df1ff3 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] [instance: 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f] Refreshing network info cache for port 2c684388-b6d9-4de0-8691-29807fabed2c _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  9 10:49:08 compute-0 podman[240144]: 2025-12-09 10:49:08.977627448 +0000 UTC m=+0.138607612 container health_status 5da5cd4e36e0bba48fb617392bc8983ed1dbced7e4599ef74bb3327a2d50468d (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, config_id=edpm, io.openshift.tags=minimal rhel9, architecture=x86_64, distribution-scope=public, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., name=ubi9-minimal, release=1755695350, vcs-type=git, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec  9 10:49:10 compute-0 nova_compute[189493]: 2025-12-09 10:49:10.850 189497 DEBUG nova.network.neutron [req-610a9a60-b8a4-42de-a3ed-7dc2808b2037 req-fe3e58d9-02df-47de-bc73-340568df1ff3 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] [instance: 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f] Updated VIF entry in instance network info cache for port 2c684388-b6d9-4de0-8691-29807fabed2c. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Dec  9 10:49:10 compute-0 nova_compute[189493]: 2025-12-09 10:49:10.850 189497 DEBUG nova.network.neutron [req-610a9a60-b8a4-42de-a3ed-7dc2808b2037 req-fe3e58d9-02df-47de-bc73-340568df1ff3 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] [instance: 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f] Updating instance_info_cache with network_info: [{"id": "2c684388-b6d9-4de0-8691-29807fabed2c", "address": "fa:16:3e:c7:65:39", "network": {"id": "c5af7354-5afe-400a-9e13-5500648117d8", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.250", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.226", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "736bbfddbeea47e3ac9d863ba120b8f2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2c684388-b6", "ovs_interfaceid": "2c684388-b6d9-4de0-8691-29807fabed2c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec  9 10:49:10 compute-0 nova_compute[189493]: 2025-12-09 10:49:10.873 189497 DEBUG oslo_concurrency.lockutils [req-610a9a60-b8a4-42de-a3ed-7dc2808b2037 req-fe3e58d9-02df-47de-bc73-340568df1ff3 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] Releasing lock "refresh_cache-41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
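The instance network info cache entry above is logged as plain JSON. A minimal sketch (Python; field names copied from the log line itself rather than from any documented Nova interface) of pulling the fixed and floating addresses out of such an entry:

    import json

    # One VIF entry as logged by update_instance_cache_with_nw_info above,
    # abridged to the addressing fields.
    network_info = json.loads("""
    [{"id": "2c684388-b6d9-4de0-8691-29807fabed2c",
      "network": {"subnets": [{"ips": [{"address": "192.168.0.250",
        "type": "fixed",
        "floating_ips": [{"address": "192.168.122.226", "type": "floating"}]}]}]}}]
    """)

    for vif in network_info:
        for subnet in vif["network"]["subnets"]:
            for ip in subnet["ips"]:
                floats = [f["address"] for f in ip.get("floating_ips", [])]
                print(vif["id"], ip["address"], "->", floats)
    # -> 2c684388-b6d9-4de0-8691-29807fabed2c 192.168.0.250 -> ['192.168.122.226']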
Dec  9 10:49:10 compute-0 podman[240163]: 2025-12-09 10:49:10.990844133 +0000 UTC m=+0.134485915 container health_status e0a077177b2f078df1f170a6e5c0e8e08d4365b999ec0c487047ed6ab628f3d6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec  9 10:49:11 compute-0 nova_compute[189493]: 2025-12-09 10:49:11.196 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 10:49:12 compute-0 nova_compute[189493]: 2025-12-09 10:49:11.999 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 10:49:14 compute-0 podman[240187]: 2025-12-09 10:49:14.795402224 +0000 UTC m=+0.114545159 container health_status d3a438131bb4ae6fd62d2e1493edbbbd51d1b8d6cbe1e9243f414a3aa421452b (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  9 10:49:16 compute-0 nova_compute[189493]: 2025-12-09 10:49:16.201 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 10:49:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:49:16.977 106644 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  9 10:49:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:49:16.978 106644 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  9 10:49:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:49:16.979 106644 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
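The Acquiring/acquired/released triple above is the standard oslo.concurrency lock trace. A minimal sketch of the decorator pattern that produces it (the lock name matches the log; the function body here is illustrative only):

    from oslo_concurrency import lockutils

    @lockutils.synchronized("_check_child_processes")
    def check_child_processes():
        # While this body runs, no other thread in the process may enter a
        # function synchronized on the same name; lockutils emits the
        # Acquiring/acquired/released DEBUG lines seen above, including the
        # waited/held durations.
        pass

    check_child_processes()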
Dec  9 10:49:17 compute-0 nova_compute[189493]: 2025-12-09 10:49:17.002 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 10:49:18 compute-0 nova_compute[189493]: 2025-12-09 10:49:18.842 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  9 10:49:18 compute-0 nova_compute[189493]: 2025-12-09 10:49:18.842 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Dec  9 10:49:21 compute-0 nova_compute[189493]: 2025-12-09 10:49:21.208 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 10:49:22 compute-0 nova_compute[189493]: 2025-12-09 10:49:22.005 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 10:49:23 compute-0 podman[240211]: 2025-12-09 10:49:23.001047877 +0000 UTC m=+0.146480446 container health_status 0391d8911d61abd7376f1f93f329cadfe8d3add845c9e6f46fc2c3dfbcc4f02a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec  9 10:49:23 compute-0 nova_compute[189493]: 2025-12-09 10:49:23.889 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  9 10:49:23 compute-0 nova_compute[189493]: 2025-12-09 10:49:23.890 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  9 10:49:23 compute-0 nova_compute[189493]: 2025-12-09 10:49:23.890 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  9 10:49:24 compute-0 nova_compute[189493]: 2025-12-09 10:49:24.842 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  9 10:49:24 compute-0 nova_compute[189493]: 2025-12-09 10:49:24.843 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  9 10:49:25 compute-0 nova_compute[189493]: 2025-12-09 10:49:25.858 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  9 10:49:25 compute-0 nova_compute[189493]: 2025-12-09 10:49:25.859 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  9 10:49:25 compute-0 nova_compute[189493]: 2025-12-09 10:49:25.891 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  9 10:49:25 compute-0 nova_compute[189493]: 2025-12-09 10:49:25.891 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  9 10:49:25 compute-0 nova_compute[189493]: 2025-12-09 10:49:25.891 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  9 10:49:25 compute-0 nova_compute[189493]: 2025-12-09 10:49:25.891 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec  9 10:49:26 compute-0 nova_compute[189493]: 2025-12-09 10:49:26.015 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  9 10:49:26 compute-0 nova_compute[189493]: 2025-12-09 10:49:26.089 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  9 10:49:26 compute-0 nova_compute[189493]: 2025-12-09 10:49:26.090 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  9 10:49:26 compute-0 nova_compute[189493]: 2025-12-09 10:49:26.147 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  9 10:49:26 compute-0 nova_compute[189493]: 2025-12-09 10:49:26.149 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  9 10:49:26 compute-0 nova_compute[189493]: 2025-12-09 10:49:26.215 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 10:49:26 compute-0 nova_compute[189493]: 2025-12-09 10:49:26.218 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.eph0 --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  9 10:49:26 compute-0 nova_compute[189493]: 2025-12-09 10:49:26.219 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  9 10:49:26 compute-0 nova_compute[189493]: 2025-12-09 10:49:26.299 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.eph0 --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
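The qemu-img probes above are issued through oslo.concurrency, which wraps the command in its prlimit helper so a hung or runaway qemu-img cannot exhaust the host: --as caps the address space in bytes and --cpu the CPU seconds. A minimal sketch of a call that would produce such a command line (the instance path is shortened to a placeholder; the exact kwargs are inferred from the logged output, not confirmed against the Nova source):

    from oslo_concurrency import processutils

    limits = processutils.ProcessLimits(address_space=1073741824,  # --as
                                        cpu_time=30)               # --cpu
    out, err = processutils.execute(
        "env", "LC_ALL=C", "LANG=C",
        "qemu-img", "info", "/var/lib/nova/instances/<uuid>/disk",
        "--force-share", "--output=json",
        prlimit=limits)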
Dec  9 10:49:26 compute-0 nova_compute[189493]: 2025-12-09 10:49:26.686 189497 WARNING nova.virt.libvirt.driver [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec  9 10:49:26 compute-0 nova_compute[189493]: 2025-12-09 10:49:26.688 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5253MB free_disk=72.20544052124023GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec  9 10:49:26 compute-0 nova_compute[189493]: 2025-12-09 10:49:26.688 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  9 10:49:26 compute-0 nova_compute[189493]: 2025-12-09 10:49:26.688 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  9 10:49:26 compute-0 podman[240247]: 2025-12-09 10:49:26.951953468 +0000 UTC m=+0.085875912 container health_status 8508a94dacd5acdb5dbf860f4282331529be5c86ebd3e90b10e1dde8bc5013e9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec  9 10:49:26 compute-0 nova_compute[189493]: 2025-12-09 10:49:26.978 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Instance 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec  9 10:49:26 compute-0 nova_compute[189493]: 2025-12-09 10:49:26.979 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec  9 10:49:26 compute-0 nova_compute[189493]: 2025-12-09 10:49:26.979 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec  9 10:49:27 compute-0 nova_compute[189493]: 2025-12-09 10:49:27.009 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 10:49:27 compute-0 nova_compute[189493]: 2025-12-09 10:49:27.079 189497 DEBUG nova.scheduler.client.report [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Refreshing inventories for resource provider cdc1168d-33c9-4d2c-8f23-1b695a68afd0 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Dec  9 10:49:27 compute-0 nova_compute[189493]: 2025-12-09 10:49:27.159 189497 DEBUG nova.scheduler.client.report [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Updating ProviderTree inventory for provider cdc1168d-33c9-4d2c-8f23-1b695a68afd0 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Dec  9 10:49:27 compute-0 nova_compute[189493]: 2025-12-09 10:49:27.159 189497 DEBUG nova.compute.provider_tree [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Updating inventory in ProviderTree for provider cdc1168d-33c9-4d2c-8f23-1b695a68afd0 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec  9 10:49:27 compute-0 nova_compute[189493]: 2025-12-09 10:49:27.176 189497 DEBUG nova.scheduler.client.report [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Refreshing aggregate associations for resource provider cdc1168d-33c9-4d2c-8f23-1b695a68afd0, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Dec  9 10:49:27 compute-0 nova_compute[189493]: 2025-12-09 10:49:27.199 189497 DEBUG nova.scheduler.client.report [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Refreshing trait associations for resource provider cdc1168d-33c9-4d2c-8f23-1b695a68afd0, traits: COMPUTE_STORAGE_BUS_SATA,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_SSE,HW_CPU_X86_AMD_SVM,HW_CPU_X86_SSE4A,COMPUTE_STORAGE_BUS_FDC,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_SSE42,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_BMI,HW_CPU_X86_BMI2,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SHA,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_AESNI,HW_CPU_X86_CLMUL,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_ABM,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_SSSE3,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_SVM,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_DEVICE_TAGGING,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_F16C,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_AVX2,COMPUTE_NODE,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_SSE2,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_FMA3,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_ACCELERATORS,HW_CPU_X86_MMX,COMPUTE_SECURITY_TPM_2_0,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE41,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_GRAPHICS_MODEL_BOCHS _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Dec  9 10:49:27 compute-0 nova_compute[189493]: 2025-12-09 10:49:27.240 189497 DEBUG nova.compute.provider_tree [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Updating inventory in ProviderTree for provider cdc1168d-33c9-4d2c-8f23-1b695a68afd0 with inventory: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 79, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec  9 10:49:27 compute-0 ovn_controller[97780]: 2025-12-09T10:49:27Z|00004|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:c7:65:39 192.168.0.250
Dec  9 10:49:27 compute-0 ovn_controller[97780]: 2025-12-09T10:49:27Z|00005|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:c7:65:39 192.168.0.250
Dec  9 10:49:27 compute-0 nova_compute[189493]: 2025-12-09 10:49:27.725 189497 DEBUG nova.scheduler.client.report [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Updated inventory for provider cdc1168d-33c9-4d2c-8f23-1b695a68afd0 with generation 3 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 79, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957
Dec  9 10:49:27 compute-0 nova_compute[189493]: 2025-12-09 10:49:27.726 189497 DEBUG nova.compute.provider_tree [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Updating resource provider cdc1168d-33c9-4d2c-8f23-1b695a68afd0 generation from 3 to 4 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Dec  9 10:49:27 compute-0 nova_compute[189493]: 2025-12-09 10:49:27.727 189497 DEBUG nova.compute.provider_tree [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Updating inventory in ProviderTree for provider cdc1168d-33c9-4d2c-8f23-1b695a68afd0 with inventory: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
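Placement derives schedulable capacity from the inventory pushed above as (total - reserved) * allocation_ratio per resource class. A worked check against the logged values:

    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 79,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        print(rc, (inv["total"] - inv["reserved"]) * inv["allocation_ratio"])
    # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 70.2 -- the running instance's
    # allocation of {VCPU: 1, MEMORY_MB: 512, DISK_GB: 2} is counted against these.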
Dec  9 10:49:27 compute-0 nova_compute[189493]: 2025-12-09 10:49:27.992 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec  9 10:49:27 compute-0 nova_compute[189493]: 2025-12-09 10:49:27.993 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.304s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  9 10:49:28 compute-0 nova_compute[189493]: 2025-12-09 10:49:28.972 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  9 10:49:29 compute-0 nova_compute[189493]: 2025-12-09 10:49:29.132 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  9 10:49:29 compute-0 nova_compute[189493]: 2025-12-09 10:49:29.133 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec  9 10:49:29 compute-0 nova_compute[189493]: 2025-12-09 10:49:29.134 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec  9 10:49:29 compute-0 nova_compute[189493]: 2025-12-09 10:49:29.612 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Acquiring lock "refresh_cache-41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec  9 10:49:29 compute-0 nova_compute[189493]: 2025-12-09 10:49:29.613 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Acquired lock "refresh_cache-41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec  9 10:49:29 compute-0 nova_compute[189493]: 2025-12-09 10:49:29.613 189497 DEBUG nova.network.neutron [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] [instance: 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec  9 10:49:29 compute-0 nova_compute[189493]: 2025-12-09 10:49:29.614 189497 DEBUG nova.objects.instance [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec  9 10:49:29 compute-0 podman[203687]: time="2025-12-09T10:49:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  9 10:49:29 compute-0 podman[203687]: @ - - [09/Dec/2025:10:49:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Dec  9 10:49:29 compute-0 podman[203687]: @ - - [09/Dec/2025:10:49:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4763 "" "Go-http-client/1.1"
Dec  9 10:49:30 compute-0 nova_compute[189493]: 2025-12-09 10:49:30.881 189497 DEBUG nova.network.neutron [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] [instance: 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f] Updating instance_info_cache with network_info: [{"id": "2c684388-b6d9-4de0-8691-29807fabed2c", "address": "fa:16:3e:c7:65:39", "network": {"id": "c5af7354-5afe-400a-9e13-5500648117d8", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.250", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.226", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "736bbfddbeea47e3ac9d863ba120b8f2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2c684388-b6", "ovs_interfaceid": "2c684388-b6d9-4de0-8691-29807fabed2c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec  9 10:49:30 compute-0 nova_compute[189493]: 2025-12-09 10:49:30.905 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Releasing lock "refresh_cache-41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec  9 10:49:30 compute-0 nova_compute[189493]: 2025-12-09 10:49:30.906 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] [instance: 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec  9 10:49:30 compute-0 nova_compute[189493]: 2025-12-09 10:49:30.907 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  9 10:49:30 compute-0 nova_compute[189493]: 2025-12-09 10:49:30.907 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  9 10:49:30 compute-0 nova_compute[189493]: 2025-12-09 10:49:30.908 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec  9 10:49:30 compute-0 nova_compute[189493]: 2025-12-09 10:49:30.908 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  9 10:49:30 compute-0 nova_compute[189493]: 2025-12-09 10:49:30.909 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Dec  9 10:49:30 compute-0 nova_compute[189493]: 2025-12-09 10:49:30.924 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Dec  9 10:49:30 compute-0 podman[240284]: 2025-12-09 10:49:30.931068983 +0000 UTC m=+0.088243880 container health_status ceb1c84a2b093143b9383b7e11364d7e851348d724743a0cd9ce4fd0c7070c92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_ipmi)
Dec  9 10:49:31 compute-0 nova_compute[189493]: 2025-12-09 10:49:31.221 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 10:49:31 compute-0 openstack_network_exporter[205823]: ERROR   10:49:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  9 10:49:31 compute-0 openstack_network_exporter[205823]: ERROR   10:49:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  9 10:49:31 compute-0 openstack_network_exporter[205823]: ERROR   10:49:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  9 10:49:31 compute-0 openstack_network_exporter[205823]: ERROR   10:49:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  9 10:49:31 compute-0 openstack_network_exporter[205823]: ERROR   10:49:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
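The exporter errors above come from PID discovery: appctl-style targets are resolved by locating a <daemon>.<pid>.ctl control socket in the daemon's run directory, and ovn-northd only runs on controller nodes, so nothing matches on this compute host. A sketch of the same check, assuming the run-directory paths mounted into the exporter container earlier in the log:

    import glob

    # Empty on a compute node: ovn-northd runs on the controllers, which is
    # exactly what "no control socket files found for ovn-northd" reports.
    print(glob.glob("/run/ovn/ovn-northd.*.ctl"))
    print(glob.glob("/run/openvswitch/ovsdb-server.*.ctl"))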
Dec  9 10:49:32 compute-0 nova_compute[189493]: 2025-12-09 10:49:32.013 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 10:49:32 compute-0 podman[240302]: 2025-12-09 10:49:32.942040414 +0000 UTC m=+0.097126203 container health_status 8ad198c17f1da12dc50d5e17562d0139fb2a2f84db056ee9551dbf4f34c4cb9d (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., name=ubi9, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0, managed_by=edpm_ansible, container_name=kepler, distribution-scope=public, io.openshift.tags=base rhel9, build-date=2024-09-18T21:23:30, io.k8s.display-name=Red Hat Universal Base Image 9, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., com.redhat.component=ubi9-container, maintainer=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.openshift.expose-services=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm)
Dec  9 10:49:35 compute-0 podman[240324]: 2025-12-09 10:49:35.8450649 +0000 UTC m=+0.087529310 container health_status b432835229990b9e7cd237d75f8273b15e565fca524d4ea9a7c1f1bf3c773614 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, managed_by=edpm_ansible, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_compute)
Dec  9 10:49:35 compute-0 podman[240323]: 2025-12-09 10:49:35.857640637 +0000 UTC m=+0.112025097 container health_status 8f562587c42532f877bd4ac5090cf2d81dd9415b6201e22f74972e6d6b9e9403 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec  9 10:49:36 compute-0 nova_compute[189493]: 2025-12-09 10:49:36.226 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 10:49:37 compute-0 nova_compute[189493]: 2025-12-09 10:49:37.018 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 10:49:37 compute-0 ovn_controller[97780]: 2025-12-09T10:49:37Z|00034|memory_trim|INFO|Detected inactivity (last active 30007 ms ago): trimming memory
Dec  9 10:49:39 compute-0 podman[240361]: 2025-12-09 10:49:39.945987766 +0000 UTC m=+0.098417509 container health_status 5da5cd4e36e0bba48fb617392bc8983ed1dbced7e4599ef74bb3327a2d50468d (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, vcs-type=git, io.openshift.expose-services=, architecture=x86_64, config_id=edpm, version=9.6, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, name=ubi9-minimal, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9)
Dec  9 10:49:41 compute-0 nova_compute[189493]: 2025-12-09 10:49:41.230 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 10:49:42 compute-0 nova_compute[189493]: 2025-12-09 10:49:42.022 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 10:49:42 compute-0 podman[240382]: 2025-12-09 10:49:42.067016246 +0000 UTC m=+0.214063228 container health_status e0a077177b2f078df1f170a6e5c0e8e08d4365b999ec0c487047ed6ab628f3d6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec  9 10:49:44 compute-0 podman[240406]: 2025-12-09 10:49:44.947722857 +0000 UTC m=+0.092236854 container health_status d3a438131bb4ae6fd62d2e1493edbbbd51d1b8d6cbe1e9243f414a3aa421452b (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  9 10:49:46 compute-0 nova_compute[189493]: 2025-12-09 10:49:46.236 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 10:49:47 compute-0 nova_compute[189493]: 2025-12-09 10:49:47.023 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 10:49:48 compute-0 nova_compute[189493]: 2025-12-09 10:49:48.512 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  9 10:49:48 compute-0 nova_compute[189493]: 2025-12-09 10:49:48.545 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Triggering sync for uuid 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Dec  9 10:49:48 compute-0 nova_compute[189493]: 2025-12-09 10:49:48.546 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Acquiring lock "41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  9 10:49:48 compute-0 nova_compute[189493]: 2025-12-09 10:49:48.547 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  9 10:49:48 compute-0 nova_compute[189493]: 2025-12-09 10:49:48.588 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.041s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
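The Acquiring/acquired/released triplet in the three lines above is oslo.concurrency's lock tracing: the power-state query for each instance runs under a lock named by the instance UUID, and the waited/held times are logged on acquire and release. A rough sketch of that pattern, with a hypothetical stand-in for the driver call:

    # Sketch of the per-UUID critical section behind the lock lines above.
    from oslo_concurrency import lockutils

    def sync_power_state(uuid):  # hypothetical stand-in for the driver query
        print("syncing", uuid)

    def query_and_sync(uuid):
        # synchronized() serializes callers on the named lock and, with
        # debug logging enabled, emits the acquired/released lines seen above.
        @lockutils.synchronized(uuid)
        def _sync():
            sync_power_state(uuid)
        _sync()

    query_and_sync("41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f")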
Dec  9 10:49:51 compute-0 nova_compute[189493]: 2025-12-09 10:49:51.241 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 10:49:52 compute-0 nova_compute[189493]: 2025-12-09 10:49:52.027 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 10:49:53 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:49:53.942 106644 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=4, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '56:ee:a7', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '3e:d4:ad:27:cb:0f'}, ipsec=False) old=SB_Global(nb_cfg=3) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  9 10:49:53 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:49:53.943 106644 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
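The Matched UPDATE line is ovsdbapp dispatching a row event against the OVN southbound SB_Global table; the metadata agent reacts by scheduling a delayed chassis update (next line). A sketch of such an event class, assuming ovsdbapp's RowEvent interface from the module path in the log; the handler body is illustrative only:

    # Sketch: an ovsdbapp row event like the SbGlobalUpdateEvent matched above.
    from ovsdbapp.backend.ovs_idl import event as row_event

    class SbGlobalUpdateEvent(row_event.RowEvent):
        """React to updates of the OVN southbound SB_Global table."""

        def __init__(self):
            # Same match tuple as printed in the log:
            # events=('update',), table='SB_Global', conditions=None.
            super().__init__((self.ROW_UPDATE,), 'SB_Global', None)
            self.event_name = self.__class__.__name__

        def run(self, event, row, old):
            # Invoked after a successful match; the real agent delays its
            # chassis-table refresh here.
            print('nb_cfg: %s -> %s' % (getattr(old, 'nb_cfg', None), row.nb_cfg))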
Dec  9 10:49:53 compute-0 nova_compute[189493]: 2025-12-09 10:49:53.944 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 10:49:53 compute-0 podman[240431]: 2025-12-09 10:49:53.956507074 +0000 UTC m=+0.098564493 container health_status 0391d8911d61abd7376f1f93f329cadfe8d3add845c9e6f46fc2c3dfbcc4f02a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3)
Dec  9 10:49:56 compute-0 nova_compute[189493]: 2025-12-09 10:49:56.246 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 10:49:57 compute-0 nova_compute[189493]: 2025-12-09 10:49:57.029 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 10:49:57 compute-0 podman[240451]: 2025-12-09 10:49:57.919520672 +0000 UTC m=+0.074807159 container health_status 8508a94dacd5acdb5dbf860f4282331529be5c86ebd3e90b10e1dde8bc5013e9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  9 10:49:59 compute-0 podman[203687]: time="2025-12-09T10:49:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  9 10:49:59 compute-0 podman[203687]: @ - - [09/Dec/2025:10:49:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Dec  9 10:49:59 compute-0 podman[203687]: @ - - [09/Dec/2025:10:49:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4761 "" "Go-http-client/1.1"
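The two GET requests above are served by podman's libpod REST API on the unix socket /run/podman/podman.sock (the socket the podman_exporter container mounts, per its config_data). A stdlib-only sketch of the first request, assuming that socket path and the API version prefix from the log:

    # Sketch: query the libpod REST API over the podman unix socket.
    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        def __init__(self, path: str):
            super().__init__("localhost")
            self._path = path

        def connect(self) -> None:
            # Swap the TCP connect for a unix-domain socket connect.
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self._path)

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    # Same endpoint as the first logged request.
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    containers = json.loads(conn.getresponse().read())
    print(len(containers), "containers")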
Dec  9 10:49:59 compute-0 nova_compute[189493]: 2025-12-09 10:49:59.976 189497 DEBUG oslo_concurrency.lockutils [None req-94e35f23-c0f6-4b84-9814-2c6fdae43941 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Acquiring lock "1bddf2bf-8932-4428-97d7-7342a7ec414b" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  9 10:49:59 compute-0 nova_compute[189493]: 2025-12-09 10:49:59.978 189497 DEBUG oslo_concurrency.lockutils [None req-94e35f23-c0f6-4b84-9814-2c6fdae43941 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lock "1bddf2bf-8932-4428-97d7-7342a7ec414b" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  9 10:50:00 compute-0 nova_compute[189493]: 2025-12-09 10:49:59.999 189497 DEBUG nova.compute.manager [None req-94e35f23-c0f6-4b84-9814-2c6fdae43941 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 1bddf2bf-8932-4428-97d7-7342a7ec414b] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec  9 10:50:00 compute-0 nova_compute[189493]: 2025-12-09 10:50:00.088 189497 DEBUG oslo_concurrency.lockutils [None req-94e35f23-c0f6-4b84-9814-2c6fdae43941 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  9 10:50:00 compute-0 nova_compute[189493]: 2025-12-09 10:50:00.089 189497 DEBUG oslo_concurrency.lockutils [None req-94e35f23-c0f6-4b84-9814-2c6fdae43941 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  9 10:50:00 compute-0 nova_compute[189493]: 2025-12-09 10:50:00.102 189497 DEBUG nova.virt.hardware [None req-94e35f23-c0f6-4b84-9814-2c6fdae43941 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec  9 10:50:00 compute-0 nova_compute[189493]: 2025-12-09 10:50:00.102 189497 INFO nova.compute.claims [None req-94e35f23-c0f6-4b84-9814-2c6fdae43941 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 1bddf2bf-8932-4428-97d7-7342a7ec414b] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec  9 10:50:00 compute-0 nova_compute[189493]: 2025-12-09 10:50:00.240 189497 DEBUG nova.compute.provider_tree [None req-94e35f23-c0f6-4b84-9814-2c6fdae43941 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Inventory has not changed in ProviderTree for provider: cdc1168d-33c9-4d2c-8f23-1b695a68afd0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  9 10:50:00 compute-0 nova_compute[189493]: 2025-12-09 10:50:00.253 189497 DEBUG nova.scheduler.client.report [None req-94e35f23-c0f6-4b84-9814-2c6fdae43941 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Inventory has not changed for provider cdc1168d-33c9-4d2c-8f23-1b695a68afd0 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
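Placement computes usable capacity per resource class as (total - reserved) * allocation_ratio, so the inventory above works out to 7167 MB of RAM, 32 schedulable VCPUs, and 70.2 GB of disk. A quick check of that arithmetic:

    # Sketch: effective capacity per resource class, using the inventory
    # data just logged: (total - reserved) * allocation_ratio.
    inventory = {
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "VCPU": {"total": 8, "reserved": 0, "allocation_ratio": 4.0},
        "DISK_GB": {"total": 79, "reserved": 1, "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        cap = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, cap)
    # MEMORY_MB 7167.0, VCPU 32.0, DISK_GB 70.2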
Dec  9 10:50:00 compute-0 nova_compute[189493]: 2025-12-09 10:50:00.282 189497 DEBUG oslo_concurrency.lockutils [None req-94e35f23-c0f6-4b84-9814-2c6fdae43941 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.192s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  9 10:50:00 compute-0 nova_compute[189493]: 2025-12-09 10:50:00.283 189497 DEBUG nova.compute.manager [None req-94e35f23-c0f6-4b84-9814-2c6fdae43941 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 1bddf2bf-8932-4428-97d7-7342a7ec414b] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec  9 10:50:00 compute-0 nova_compute[189493]: 2025-12-09 10:50:00.325 189497 DEBUG nova.compute.manager [None req-94e35f23-c0f6-4b84-9814-2c6fdae43941 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 1bddf2bf-8932-4428-97d7-7342a7ec414b] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec  9 10:50:00 compute-0 nova_compute[189493]: 2025-12-09 10:50:00.326 189497 DEBUG nova.network.neutron [None req-94e35f23-c0f6-4b84-9814-2c6fdae43941 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 1bddf2bf-8932-4428-97d7-7342a7ec414b] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec  9 10:50:00 compute-0 nova_compute[189493]: 2025-12-09 10:50:00.349 189497 INFO nova.virt.libvirt.driver [None req-94e35f23-c0f6-4b84-9814-2c6fdae43941 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 1bddf2bf-8932-4428-97d7-7342a7ec414b] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec  9 10:50:00 compute-0 nova_compute[189493]: 2025-12-09 10:50:00.376 189497 DEBUG nova.compute.manager [None req-94e35f23-c0f6-4b84-9814-2c6fdae43941 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 1bddf2bf-8932-4428-97d7-7342a7ec414b] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec  9 10:50:00 compute-0 nova_compute[189493]: 2025-12-09 10:50:00.464 189497 DEBUG nova.compute.manager [None req-94e35f23-c0f6-4b84-9814-2c6fdae43941 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 1bddf2bf-8932-4428-97d7-7342a7ec414b] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec  9 10:50:00 compute-0 nova_compute[189493]: 2025-12-09 10:50:00.466 189497 DEBUG nova.virt.libvirt.driver [None req-94e35f23-c0f6-4b84-9814-2c6fdae43941 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 1bddf2bf-8932-4428-97d7-7342a7ec414b] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec  9 10:50:00 compute-0 nova_compute[189493]: 2025-12-09 10:50:00.466 189497 INFO nova.virt.libvirt.driver [None req-94e35f23-c0f6-4b84-9814-2c6fdae43941 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 1bddf2bf-8932-4428-97d7-7342a7ec414b] Creating image(s)#033[00m
Dec  9 10:50:00 compute-0 nova_compute[189493]: 2025-12-09 10:50:00.467 189497 DEBUG oslo_concurrency.lockutils [None req-94e35f23-c0f6-4b84-9814-2c6fdae43941 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Acquiring lock "/var/lib/nova/instances/1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  9 10:50:00 compute-0 nova_compute[189493]: 2025-12-09 10:50:00.467 189497 DEBUG oslo_concurrency.lockutils [None req-94e35f23-c0f6-4b84-9814-2c6fdae43941 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lock "/var/lib/nova/instances/1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  9 10:50:00 compute-0 nova_compute[189493]: 2025-12-09 10:50:00.468 189497 DEBUG oslo_concurrency.lockutils [None req-94e35f23-c0f6-4b84-9814-2c6fdae43941 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lock "/var/lib/nova/instances/1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  9 10:50:00 compute-0 nova_compute[189493]: 2025-12-09 10:50:00.486 189497 DEBUG oslo_concurrency.processutils [None req-94e35f23-c0f6-4b84-9814-2c6fdae43941 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e23edb89d785ecc8dd3ccb4d60aa458ce75a798 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  9 10:50:00 compute-0 nova_compute[189493]: 2025-12-09 10:50:00.583 189497 DEBUG oslo_concurrency.processutils [None req-94e35f23-c0f6-4b84-9814-2c6fdae43941 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e23edb89d785ecc8dd3ccb4d60aa458ce75a798 --force-share --output=json" returned: 0 in 0.096s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  9 10:50:00 compute-0 nova_compute[189493]: 2025-12-09 10:50:00.585 189497 DEBUG oslo_concurrency.lockutils [None req-94e35f23-c0f6-4b84-9814-2c6fdae43941 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Acquiring lock "9e23edb89d785ecc8dd3ccb4d60aa458ce75a798" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  9 10:50:00 compute-0 nova_compute[189493]: 2025-12-09 10:50:00.586 189497 DEBUG oslo_concurrency.lockutils [None req-94e35f23-c0f6-4b84-9814-2c6fdae43941 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lock "9e23edb89d785ecc8dd3ccb4d60aa458ce75a798" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  9 10:50:00 compute-0 nova_compute[189493]: 2025-12-09 10:50:00.610 189497 DEBUG oslo_concurrency.processutils [None req-94e35f23-c0f6-4b84-9814-2c6fdae43941 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e23edb89d785ecc8dd3ccb4d60aa458ce75a798 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  9 10:50:00 compute-0 nova_compute[189493]: 2025-12-09 10:50:00.694 189497 DEBUG oslo_concurrency.processutils [None req-94e35f23-c0f6-4b84-9814-2c6fdae43941 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e23edb89d785ecc8dd3ccb4d60aa458ce75a798 --force-share --output=json" returned: 0 in 0.084s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  9 10:50:00 compute-0 nova_compute[189493]: 2025-12-09 10:50:00.695 189497 DEBUG oslo_concurrency.processutils [None req-94e35f23-c0f6-4b84-9814-2c6fdae43941 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/9e23edb89d785ecc8dd3ccb4d60aa458ce75a798,backing_fmt=raw /var/lib/nova/instances/1bddf2bf-8932-4428-97d7-7342a7ec414b/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  9 10:50:00 compute-0 nova_compute[189493]: 2025-12-09 10:50:00.738 189497 DEBUG oslo_concurrency.processutils [None req-94e35f23-c0f6-4b84-9814-2c6fdae43941 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/9e23edb89d785ecc8dd3ccb4d60aa458ce75a798,backing_fmt=raw /var/lib/nova/instances/1bddf2bf-8932-4428-97d7-7342a7ec414b/disk 1073741824" returned: 0 in 0.043s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  9 10:50:00 compute-0 nova_compute[189493]: 2025-12-09 10:50:00.739 189497 DEBUG oslo_concurrency.lockutils [None req-94e35f23-c0f6-4b84-9814-2c6fdae43941 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lock "9e23edb89d785ecc8dd3ccb4d60aa458ce75a798" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.153s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
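The create_qcow2_image sequence above probes the cached base image, then creates the instance disk as a qcow2 overlay whose backing_file points at the raw base, with a virtual size of 1073741824 bytes (the flavor's 1 GB root disk). A sketch reproducing the same two commands with subprocess; the paths are copied from the log:

    # Sketch: the qcow2 overlay creation nova just ran, minus the prlimit
    # wrapper it uses to cap address space and CPU time.
    import json
    import subprocess

    base = "/var/lib/nova/instances/_base/9e23edb89d785ecc8dd3ccb4d60aa458ce75a798"
    disk = "/var/lib/nova/instances/1bddf2bf-8932-4428-97d7-7342a7ec414b/disk"

    subprocess.run(
        ["qemu-img", "create", "-f", "qcow2",
         "-o", f"backing_file={base},backing_fmt=raw", disk, "1073741824"],
        check=True,
    )
    # The surrounding DEBUG lines run the same probe after creation.
    info = json.loads(subprocess.run(
        ["qemu-img", "info", "--force-share", "--output=json", disk],
        check=True, capture_output=True, text=True,
    ).stdout)
    print(info["format"], info["virtual-size"])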
Dec  9 10:50:00 compute-0 nova_compute[189493]: 2025-12-09 10:50:00.739 189497 DEBUG oslo_concurrency.processutils [None req-94e35f23-c0f6-4b84-9814-2c6fdae43941 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e23edb89d785ecc8dd3ccb4d60aa458ce75a798 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  9 10:50:00 compute-0 nova_compute[189493]: 2025-12-09 10:50:00.805 189497 DEBUG oslo_concurrency.processutils [None req-94e35f23-c0f6-4b84-9814-2c6fdae43941 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e23edb89d785ecc8dd3ccb4d60aa458ce75a798 --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  9 10:50:00 compute-0 nova_compute[189493]: 2025-12-09 10:50:00.807 189497 DEBUG nova.virt.disk.api [None req-94e35f23-c0f6-4b84-9814-2c6fdae43941 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Checking if we can resize image /var/lib/nova/instances/1bddf2bf-8932-4428-97d7-7342a7ec414b/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166#033[00m
Dec  9 10:50:00 compute-0 nova_compute[189493]: 2025-12-09 10:50:00.808 189497 DEBUG oslo_concurrency.processutils [None req-94e35f23-c0f6-4b84-9814-2c6fdae43941 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1bddf2bf-8932-4428-97d7-7342a7ec414b/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  9 10:50:00 compute-0 nova_compute[189493]: 2025-12-09 10:50:00.871 189497 DEBUG oslo_concurrency.processutils [None req-94e35f23-c0f6-4b84-9814-2c6fdae43941 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1bddf2bf-8932-4428-97d7-7342a7ec414b/disk --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  9 10:50:00 compute-0 nova_compute[189493]: 2025-12-09 10:50:00.873 189497 DEBUG nova.virt.disk.api [None req-94e35f23-c0f6-4b84-9814-2c6fdae43941 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Cannot resize image /var/lib/nova/instances/1bddf2bf-8932-4428-97d7-7342a7ec414b/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172#033[00m
Dec  9 10:50:00 compute-0 nova_compute[189493]: 2025-12-09 10:50:00.873 189497 DEBUG nova.objects.instance [None req-94e35f23-c0f6-4b84-9814-2c6fdae43941 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lazy-loading 'migration_context' on Instance uuid 1bddf2bf-8932-4428-97d7-7342a7ec414b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  9 10:50:00 compute-0 nova_compute[189493]: 2025-12-09 10:50:00.901 189497 DEBUG oslo_concurrency.lockutils [None req-94e35f23-c0f6-4b84-9814-2c6fdae43941 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Acquiring lock "/var/lib/nova/instances/1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  9 10:50:00 compute-0 nova_compute[189493]: 2025-12-09 10:50:00.902 189497 DEBUG oslo_concurrency.lockutils [None req-94e35f23-c0f6-4b84-9814-2c6fdae43941 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lock "/var/lib/nova/instances/1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  9 10:50:00 compute-0 nova_compute[189493]: 2025-12-09 10:50:00.903 189497 DEBUG oslo_concurrency.lockutils [None req-94e35f23-c0f6-4b84-9814-2c6fdae43941 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lock "/var/lib/nova/instances/1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  9 10:50:00 compute-0 nova_compute[189493]: 2025-12-09 10:50:00.916 189497 DEBUG oslo_concurrency.processutils [None req-94e35f23-c0f6-4b84-9814-2c6fdae43941 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  9 10:50:00 compute-0 nova_compute[189493]: 2025-12-09 10:50:00.985 189497 DEBUG oslo_concurrency.processutils [None req-94e35f23-c0f6-4b84-9814-2c6fdae43941 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  9 10:50:00 compute-0 nova_compute[189493]: 2025-12-09 10:50:00.987 189497 DEBUG oslo_concurrency.lockutils [None req-94e35f23-c0f6-4b84-9814-2c6fdae43941 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Acquiring lock "ephemeral_1_0706d66" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  9 10:50:00 compute-0 nova_compute[189493]: 2025-12-09 10:50:00.988 189497 DEBUG oslo_concurrency.lockutils [None req-94e35f23-c0f6-4b84-9814-2c6fdae43941 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lock "ephemeral_1_0706d66" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  9 10:50:01 compute-0 nova_compute[189493]: 2025-12-09 10:50:01.005 189497 DEBUG oslo_concurrency.processutils [None req-94e35f23-c0f6-4b84-9814-2c6fdae43941 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  9 10:50:01 compute-0 nova_compute[189493]: 2025-12-09 10:50:01.102 189497 DEBUG oslo_concurrency.processutils [None req-94e35f23-c0f6-4b84-9814-2c6fdae43941 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.097s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  9 10:50:01 compute-0 nova_compute[189493]: 2025-12-09 10:50:01.103 189497 DEBUG oslo_concurrency.processutils [None req-94e35f23-c0f6-4b84-9814-2c6fdae43941 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.eph0 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  9 10:50:01 compute-0 nova_compute[189493]: 2025-12-09 10:50:01.145 189497 DEBUG oslo_concurrency.processutils [None req-94e35f23-c0f6-4b84-9814-2c6fdae43941 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.eph0 1073741824" returned: 0 in 0.042s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  9 10:50:01 compute-0 nova_compute[189493]: 2025-12-09 10:50:01.146 189497 DEBUG oslo_concurrency.lockutils [None req-94e35f23-c0f6-4b84-9814-2c6fdae43941 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lock "ephemeral_1_0706d66" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.159s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  9 10:50:01 compute-0 nova_compute[189493]: 2025-12-09 10:50:01.147 189497 DEBUG oslo_concurrency.processutils [None req-94e35f23-c0f6-4b84-9814-2c6fdae43941 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  9 10:50:01 compute-0 nova_compute[189493]: 2025-12-09 10:50:01.210 189497 DEBUG oslo_concurrency.processutils [None req-94e35f23-c0f6-4b84-9814-2c6fdae43941 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  9 10:50:01 compute-0 nova_compute[189493]: 2025-12-09 10:50:01.212 189497 DEBUG nova.virt.libvirt.driver [None req-94e35f23-c0f6-4b84-9814-2c6fdae43941 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 1bddf2bf-8932-4428-97d7-7342a7ec414b] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec  9 10:50:01 compute-0 nova_compute[189493]: 2025-12-09 10:50:01.214 189497 DEBUG nova.virt.libvirt.driver [None req-94e35f23-c0f6-4b84-9814-2c6fdae43941 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 1bddf2bf-8932-4428-97d7-7342a7ec414b] Ensure instance console log exists: /var/lib/nova/instances/1bddf2bf-8932-4428-97d7-7342a7ec414b/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec  9 10:50:01 compute-0 nova_compute[189493]: 2025-12-09 10:50:01.215 189497 DEBUG oslo_concurrency.lockutils [None req-94e35f23-c0f6-4b84-9814-2c6fdae43941 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  9 10:50:01 compute-0 nova_compute[189493]: 2025-12-09 10:50:01.216 189497 DEBUG oslo_concurrency.lockutils [None req-94e35f23-c0f6-4b84-9814-2c6fdae43941 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  9 10:50:01 compute-0 nova_compute[189493]: 2025-12-09 10:50:01.217 189497 DEBUG oslo_concurrency.lockutils [None req-94e35f23-c0f6-4b84-9814-2c6fdae43941 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  9 10:50:01 compute-0 nova_compute[189493]: 2025-12-09 10:50:01.251 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 10:50:01 compute-0 openstack_network_exporter[205823]: ERROR   10:50:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  9 10:50:01 compute-0 openstack_network_exporter[205823]: ERROR   10:50:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  9 10:50:01 compute-0 openstack_network_exporter[205823]: ERROR   10:50:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  9 10:50:01 compute-0 openstack_network_exporter[205823]: ERROR   10:50:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  9 10:50:01 compute-0 openstack_network_exporter[205823]: ERROR   10:50:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
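The exporter errors above mean no *.ctl control sockets were found for ovsdb-server or ovn-northd, and no datapath exists yet for the PMD queries. A quick sketch that checks for those control sockets; the directories are assumptions based on the exporter's host volume mounts shown earlier (/var/run/openvswitch and /var/lib/openvswitch/ovn):

    # Sketch: look for the OVS/OVN appctl control sockets the exporter
    # failed to find. Directory choices are assumptions from the mounts.
    import glob

    for d in ("/var/run/openvswitch", "/var/lib/openvswitch/ovn"):
        ctl = glob.glob(d + "/*.ctl")
        print(d, "->", ctl or "no control sockets")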
Dec  9 10:50:01 compute-0 nova_compute[189493]: 2025-12-09 10:50:01.683 189497 DEBUG nova.network.neutron [None req-94e35f23-c0f6-4b84-9814-2c6fdae43941 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 1bddf2bf-8932-4428-97d7-7342a7ec414b] Successfully updated port: 7819acf8-daa2-4391-96d4-ef33c260f794 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec  9 10:50:01 compute-0 nova_compute[189493]: 2025-12-09 10:50:01.701 189497 DEBUG oslo_concurrency.lockutils [None req-94e35f23-c0f6-4b84-9814-2c6fdae43941 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Acquiring lock "refresh_cache-1bddf2bf-8932-4428-97d7-7342a7ec414b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  9 10:50:01 compute-0 nova_compute[189493]: 2025-12-09 10:50:01.701 189497 DEBUG oslo_concurrency.lockutils [None req-94e35f23-c0f6-4b84-9814-2c6fdae43941 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Acquired lock "refresh_cache-1bddf2bf-8932-4428-97d7-7342a7ec414b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  9 10:50:01 compute-0 nova_compute[189493]: 2025-12-09 10:50:01.702 189497 DEBUG nova.network.neutron [None req-94e35f23-c0f6-4b84-9814-2c6fdae43941 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 1bddf2bf-8932-4428-97d7-7342a7ec414b] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  9 10:50:01 compute-0 nova_compute[189493]: 2025-12-09 10:50:01.807 189497 DEBUG nova.compute.manager [req-252495f9-0493-4e2c-85e7-1505919e3e68 req-9f92104c-132e-457c-aaef-52b3f0016a9d 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] [instance: 1bddf2bf-8932-4428-97d7-7342a7ec414b] Received event network-changed-7819acf8-daa2-4391-96d4-ef33c260f794 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  9 10:50:01 compute-0 nova_compute[189493]: 2025-12-09 10:50:01.808 189497 DEBUG nova.compute.manager [req-252495f9-0493-4e2c-85e7-1505919e3e68 req-9f92104c-132e-457c-aaef-52b3f0016a9d 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] [instance: 1bddf2bf-8932-4428-97d7-7342a7ec414b] Refreshing instance network info cache due to event network-changed-7819acf8-daa2-4391-96d4-ef33c260f794. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  9 10:50:01 compute-0 nova_compute[189493]: 2025-12-09 10:50:01.808 189497 DEBUG oslo_concurrency.lockutils [req-252495f9-0493-4e2c-85e7-1505919e3e68 req-9f92104c-132e-457c-aaef-52b3f0016a9d 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] Acquiring lock "refresh_cache-1bddf2bf-8932-4428-97d7-7342a7ec414b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  9 10:50:01 compute-0 nova_compute[189493]: 2025-12-09 10:50:01.872 189497 DEBUG nova.network.neutron [None req-94e35f23-c0f6-4b84-9814-2c6fdae43941 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 1bddf2bf-8932-4428-97d7-7342a7ec414b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  9 10:50:01 compute-0 podman[240503]: 2025-12-09 10:50:01.917205463 +0000 UTC m=+0.078363519 container health_status ceb1c84a2b093143b9383b7e11364d7e851348d724743a0cd9ce4fd0c7070c92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  9 10:50:02 compute-0 nova_compute[189493]: 2025-12-09 10:50:02.031 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 10:50:02 compute-0 nova_compute[189493]: 2025-12-09 10:50:02.442 189497 DEBUG nova.network.neutron [None req-94e35f23-c0f6-4b84-9814-2c6fdae43941 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 1bddf2bf-8932-4428-97d7-7342a7ec414b] Updating instance_info_cache with network_info: [{"id": "7819acf8-daa2-4391-96d4-ef33c260f794", "address": "fa:16:3e:01:4e:b4", "network": {"id": "c5af7354-5afe-400a-9e13-5500648117d8", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.212", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.172", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "736bbfddbeea47e3ac9d863ba120b8f2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7819acf8-da", "ovs_interfaceid": "7819acf8-daa2-4391-96d4-ef33c260f794", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  9 10:50:02 compute-0 nova_compute[189493]: 2025-12-09 10:50:02.464 189497 DEBUG oslo_concurrency.lockutils [None req-94e35f23-c0f6-4b84-9814-2c6fdae43941 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Releasing lock "refresh_cache-1bddf2bf-8932-4428-97d7-7342a7ec414b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  9 10:50:02 compute-0 nova_compute[189493]: 2025-12-09 10:50:02.465 189497 DEBUG nova.compute.manager [None req-94e35f23-c0f6-4b84-9814-2c6fdae43941 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 1bddf2bf-8932-4428-97d7-7342a7ec414b] Instance network_info: |[{"id": "7819acf8-daa2-4391-96d4-ef33c260f794", "address": "fa:16:3e:01:4e:b4", "network": {"id": "c5af7354-5afe-400a-9e13-5500648117d8", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.212", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.172", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "736bbfddbeea47e3ac9d863ba120b8f2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7819acf8-da", "ovs_interfaceid": "7819acf8-daa2-4391-96d4-ef33c260f794", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
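The network_info blob logged twice above is plain JSON, so pulling out a port's fixed and floating addresses is a dictionary walk. A small sketch over that structure:

    # Sketch: extract (fixed_ip, [floating_ips]) pairs from a network_info
    # entry shaped like the one logged above (one VIF, one subnet).
    import json

    def addresses(network_info_json: str):
        for vif in json.loads(network_info_json):
            for subnet in vif["network"]["subnets"]:
                for ip in subnet["ips"]:
                    yield ip["address"], [f["address"]
                                          for f in ip.get("floating_ips", [])]

    # With the logged entry this yields ('192.168.0.212', ['192.168.122.172']).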
Dec  9 10:50:02 compute-0 nova_compute[189493]: 2025-12-09 10:50:02.466 189497 DEBUG oslo_concurrency.lockutils [req-252495f9-0493-4e2c-85e7-1505919e3e68 req-9f92104c-132e-457c-aaef-52b3f0016a9d 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] Acquired lock "refresh_cache-1bddf2bf-8932-4428-97d7-7342a7ec414b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  9 10:50:02 compute-0 nova_compute[189493]: 2025-12-09 10:50:02.466 189497 DEBUG nova.network.neutron [req-252495f9-0493-4e2c-85e7-1505919e3e68 req-9f92104c-132e-457c-aaef-52b3f0016a9d 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] [instance: 1bddf2bf-8932-4428-97d7-7342a7ec414b] Refreshing network info cache for port 7819acf8-daa2-4391-96d4-ef33c260f794 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  9 10:50:02 compute-0 nova_compute[189493]: 2025-12-09 10:50:02.473 189497 DEBUG nova.virt.libvirt.driver [None req-94e35f23-c0f6-4b84-9814-2c6fdae43941 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 1bddf2bf-8932-4428-97d7-7342a7ec414b] Start _get_guest_xml network_info=[{"id": "7819acf8-daa2-4391-96d4-ef33c260f794", "address": "fa:16:3e:01:4e:b4", "network": {"id": "c5af7354-5afe-400a-9e13-5500648117d8", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.212", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.172", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "736bbfddbeea47e3ac9d863ba120b8f2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7819acf8-da", "ovs_interfaceid": "7819acf8-daa2-4391-96d4-ef33c260f794", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.eph0': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2025-12-09T10:47:15Z,direct_url=<?>,disk_format='qcow2',id=53d12211-5d5c-4333-b3ee-e3dcf1663767,min_disk=0,min_ram=0,name='cirros',owner='736bbfddbeea47e3ac9d863ba120b8f2',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2025-12-09T10:47:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'encryption_options': None, 'encryption_format': None, 'disk_bus': 'virtio', 'boot_index': 0, 'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'size': 0, 'image_id': '53d12211-5d5c-4333-b3ee-e3dcf1663767'}], 'ephemerals': [{'encrypted': False, 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'device_name': '/dev/vdb', 'device_type': 'disk', 'guest_format': None, 'size': 1}], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  9 10:50:02 compute-0 nova_compute[189493]: 2025-12-09 10:50:02.484 189497 WARNING nova.virt.libvirt.driver [None req-94e35f23-c0f6-4b84-9814-2c6fdae43941 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  9 10:50:02 compute-0 nova_compute[189493]: 2025-12-09 10:50:02.505 189497 DEBUG nova.virt.libvirt.host [None req-94e35f23-c0f6-4b84-9814-2c6fdae43941 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  9 10:50:02 compute-0 nova_compute[189493]: 2025-12-09 10:50:02.507 189497 DEBUG nova.virt.libvirt.host [None req-94e35f23-c0f6-4b84-9814-2c6fdae43941 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  9 10:50:02 compute-0 nova_compute[189493]: 2025-12-09 10:50:02.513 189497 DEBUG nova.virt.libvirt.host [None req-94e35f23-c0f6-4b84-9814-2c6fdae43941 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  9 10:50:02 compute-0 nova_compute[189493]: 2025-12-09 10:50:02.514 189497 DEBUG nova.virt.libvirt.host [None req-94e35f23-c0f6-4b84-9814-2c6fdae43941 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec  9 10:50:02 compute-0 nova_compute[189493]: 2025-12-09 10:50:02.514 189497 DEBUG nova.virt.libvirt.driver [None req-94e35f23-c0f6-4b84-9814-2c6fdae43941 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  9 10:50:02 compute-0 nova_compute[189493]: 2025-12-09 10:50:02.515 189497 DEBUG nova.virt.hardware [None req-94e35f23-c0f6-4b84-9814-2c6fdae43941 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-09T10:47:21Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=1,extra_specs={},flavorid='cf91b364-8467-4d1e-8c92-f7d1fab99905',id=1,is_public=True,memory_mb=512,name='m1.small',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2025-12-09T10:47:15Z,direct_url=<?>,disk_format='qcow2',id=53d12211-5d5c-4333-b3ee-e3dcf1663767,min_disk=0,min_ram=0,name='cirros',owner='736bbfddbeea47e3ac9d863ba120b8f2',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2025-12-09T10:47:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  9 10:50:02 compute-0 nova_compute[189493]: 2025-12-09 10:50:02.516 189497 DEBUG nova.virt.hardware [None req-94e35f23-c0f6-4b84-9814-2c6fdae43941 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  9 10:50:02 compute-0 nova_compute[189493]: 2025-12-09 10:50:02.516 189497 DEBUG nova.virt.hardware [None req-94e35f23-c0f6-4b84-9814-2c6fdae43941 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  9 10:50:02 compute-0 nova_compute[189493]: 2025-12-09 10:50:02.517 189497 DEBUG nova.virt.hardware [None req-94e35f23-c0f6-4b84-9814-2c6fdae43941 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  9 10:50:02 compute-0 nova_compute[189493]: 2025-12-09 10:50:02.517 189497 DEBUG nova.virt.hardware [None req-94e35f23-c0f6-4b84-9814-2c6fdae43941 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  9 10:50:02 compute-0 nova_compute[189493]: 2025-12-09 10:50:02.517 189497 DEBUG nova.virt.hardware [None req-94e35f23-c0f6-4b84-9814-2c6fdae43941 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  9 10:50:02 compute-0 nova_compute[189493]: 2025-12-09 10:50:02.518 189497 DEBUG nova.virt.hardware [None req-94e35f23-c0f6-4b84-9814-2c6fdae43941 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  9 10:50:02 compute-0 nova_compute[189493]: 2025-12-09 10:50:02.519 189497 DEBUG nova.virt.hardware [None req-94e35f23-c0f6-4b84-9814-2c6fdae43941 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  9 10:50:02 compute-0 nova_compute[189493]: 2025-12-09 10:50:02.519 189497 DEBUG nova.virt.hardware [None req-94e35f23-c0f6-4b84-9814-2c6fdae43941 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  9 10:50:02 compute-0 nova_compute[189493]: 2025-12-09 10:50:02.519 189497 DEBUG nova.virt.hardware [None req-94e35f23-c0f6-4b84-9814-2c6fdae43941 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  9 10:50:02 compute-0 nova_compute[189493]: 2025-12-09 10:50:02.520 189497 DEBUG nova.virt.hardware [None req-94e35f23-c0f6-4b84-9814-2c6fdae43941 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
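
[editor's note] The DEBUG lines above trace nova's CPU-topology selection for this boot: the flavor carries no hw:cpu_* extra specs and the image no topology properties, so limits and preferences are all unset (0:0:0), the per-dimension maximum falls back to 65536, and the only topology that can hold 1 vCPU is sockets=1, cores=1, threads=1. A minimal sketch of that enumeration, assuming the simplified rule "product of dimensions equals the vCPU count" (nova's real code in nova/virt/hardware.py additionally orders results by flavor/image preference and NUMA constraints):

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class VirtCPUTopology:
        sockets: int
        cores: int
        threads: int

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                            max_threads=65536):
        # Enumerate triples whose product exactly equals the vCPU count,
        # each dimension capped by the (defaulted) limits logged above.
        for s in range(1, min(vcpus, max_sockets) + 1):
            for c in range(1, min(vcpus, max_cores) + 1):
                for t in range(1, min(vcpus, max_threads) + 1):
                    if s * c * t == vcpus:
                        yield VirtCPUTopology(s, c, t)

    print(list(possible_topologies(1)))
    # -> [VirtCPUTopology(sockets=1, cores=1, threads=1)]

The single surviving topology is what the driver later writes into the <topology sockets="1" cores="1" threads="1"/> element of the guest XML below.
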
Dec  9 10:50:02 compute-0 nova_compute[189493]: 2025-12-09 10:50:02.530 189497 DEBUG nova.virt.libvirt.vif [None req-94e35f23-c0f6-4b84-9814-2c6fdae43941 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-09T10:49:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='vn-afn7y6w-x2vp5udxgoax-du67okrzyrz6-vnf-c7uowjdwt46l',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-afn7y6w-x2vp5udxgoax-du67okrzyrz6-vnf-c7uowjdwt46l',id=2,image_ref='53d12211-5d5c-4333-b3ee-e3dcf1663767',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='24f6e5b2-dd43-46f1-87a4-e2efc1300914'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='736bbfddbeea47e3ac9d863ba120b8f2',ramdisk_id='',reservation_id='r-ljrndswf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,admin,member',image_base_image_ref='53d12211-5d5c-4333-b3ee-e3dcf1663767',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-09T10:50:00Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT03MTk3NDI2NjE4OTEyMjA2NTYzPT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTcxOTc0MjY2MTg5MTIyMDY1NjM9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09NzE5NzQyNjYxODkxMjIwNjU2Mz09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBo
YXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTcxOTc0MjY2MTg5MTIyMDY1NjM9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT03MTk3NDI2NjE4OTEyMjA2NTYzPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT03MTk3NDI2NjE4OTEyMjA2NTYzPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5
kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvKCclc1xuJywgJyAnLmpvaW4oYXJncykpICAjI
Dec  9 10:50:02 compute-0 nova_compute[189493]: ywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09NzE5NzQyNjYxODkxMjIwNjU2Mz09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTcxOTc0MjY2MTg5MTIyMDY1NjM9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT03MTk3NDI2NjE4OTEyMjA2NTYzPT0tLQo=',user_id='e6d3a937c2a74eb0816d9f63820935e0',uuid=1bddf2bf-8932-4428-97d7-7342a7ec414b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7819acf8-daa2-4391-96d4-ef33c260f794", "address": "fa:16:3e:01:4e:b4", "network": {"id": "c5af7354-5afe-400a-9e13-5500648117d8", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.212", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.172", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "736bbfddbeea47e3ac9d863ba120b8f2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": 
"ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7819acf8-da", "ovs_interfaceid": "7819acf8-daa2-4391-96d4-ef33c260f794", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  9 10:50:02 compute-0 nova_compute[189493]: 2025-12-09 10:50:02.531 189497 DEBUG nova.network.os_vif_util [None req-94e35f23-c0f6-4b84-9814-2c6fdae43941 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Converting VIF {"id": "7819acf8-daa2-4391-96d4-ef33c260f794", "address": "fa:16:3e:01:4e:b4", "network": {"id": "c5af7354-5afe-400a-9e13-5500648117d8", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.212", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.172", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "736bbfddbeea47e3ac9d863ba120b8f2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7819acf8-da", "ovs_interfaceid": "7819acf8-daa2-4391-96d4-ef33c260f794", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  9 10:50:02 compute-0 nova_compute[189493]: 2025-12-09 10:50:02.534 189497 DEBUG nova.network.os_vif_util [None req-94e35f23-c0f6-4b84-9814-2c6fdae43941 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:01:4e:b4,bridge_name='br-int',has_traffic_filtering=True,id=7819acf8-daa2-4391-96d4-ef33c260f794,network=Network(c5af7354-5afe-400a-9e13-5500648117d8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap7819acf8-da') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
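
[editor's note] Comparing the VIF dict in the "Converting VIF" line with the VIFOpenVSwitch repr in the "Converted object" line shows which legacy keys survive the translation. A hypothetical helper (not nova's actual nova_to_osvif_vif, which also builds the nested Network/Subnet objects) that makes the field mapping explicit:

    # Hypothetical illustration only: maps the legacy VIF dict keys to the
    # os-vif field names visible in the converted repr above.
    def legacy_vif_to_osvif_kwargs(vif):
        return {
            'id': vif['id'],
            'address': vif['address'],
            'vif_name': vif['devname'],
            'plugin': vif['type'],
            'active': vif['active'],
            'preserve_on_delete': vif['preserve_on_delete'],
            'bridge_name': vif['details']['bridge_name'],
            'has_traffic_filtering': vif['details']['port_filter'],
        }

    # Trimmed copy of the dict logged above.
    legacy = {
        'id': '7819acf8-daa2-4391-96d4-ef33c260f794',
        'address': 'fa:16:3e:01:4e:b4',
        'devname': 'tap7819acf8-da',
        'type': 'ovs',
        'active': False,
        'preserve_on_delete': True,
        'details': {'port_filter': True, 'bridge_name': 'br-int'},
    }
    print(legacy_vif_to_osvif_kwargs(legacy))
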
Dec  9 10:50:02 compute-0 nova_compute[189493]: 2025-12-09 10:50:02.536 189497 DEBUG nova.objects.instance [None req-94e35f23-c0f6-4b84-9814-2c6fdae43941 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lazy-loading 'pci_devices' on Instance uuid 1bddf2bf-8932-4428-97d7-7342a7ec414b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  9 10:50:02 compute-0 nova_compute[189493]: 2025-12-09 10:50:02.552 189497 DEBUG nova.virt.libvirt.driver [None req-94e35f23-c0f6-4b84-9814-2c6fdae43941 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 1bddf2bf-8932-4428-97d7-7342a7ec414b] End _get_guest_xml xml=<domain type="kvm">
Dec  9 10:50:02 compute-0 nova_compute[189493]:  <uuid>1bddf2bf-8932-4428-97d7-7342a7ec414b</uuid>
Dec  9 10:50:02 compute-0 nova_compute[189493]:  <name>instance-00000002</name>
Dec  9 10:50:02 compute-0 nova_compute[189493]:  <memory>524288</memory>
Dec  9 10:50:02 compute-0 nova_compute[189493]:  <vcpu>1</vcpu>
Dec  9 10:50:02 compute-0 nova_compute[189493]:  <metadata>
Dec  9 10:50:02 compute-0 nova_compute[189493]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  9 10:50:02 compute-0 nova_compute[189493]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  9 10:50:02 compute-0 nova_compute[189493]:      <nova:name>vn-afn7y6w-x2vp5udxgoax-du67okrzyrz6-vnf-c7uowjdwt46l</nova:name>
Dec  9 10:50:02 compute-0 nova_compute[189493]:      <nova:creationTime>2025-12-09 10:50:02</nova:creationTime>
Dec  9 10:50:02 compute-0 nova_compute[189493]:      <nova:flavor name="m1.small">
Dec  9 10:50:02 compute-0 nova_compute[189493]:        <nova:memory>512</nova:memory>
Dec  9 10:50:02 compute-0 nova_compute[189493]:        <nova:disk>1</nova:disk>
Dec  9 10:50:02 compute-0 nova_compute[189493]:        <nova:swap>0</nova:swap>
Dec  9 10:50:02 compute-0 nova_compute[189493]:        <nova:ephemeral>1</nova:ephemeral>
Dec  9 10:50:02 compute-0 nova_compute[189493]:        <nova:vcpus>1</nova:vcpus>
Dec  9 10:50:02 compute-0 nova_compute[189493]:      </nova:flavor>
Dec  9 10:50:02 compute-0 nova_compute[189493]:      <nova:owner>
Dec  9 10:50:02 compute-0 nova_compute[189493]:        <nova:user uuid="e6d3a937c2a74eb0816d9f63820935e0">admin</nova:user>
Dec  9 10:50:02 compute-0 nova_compute[189493]:        <nova:project uuid="736bbfddbeea47e3ac9d863ba120b8f2">admin</nova:project>
Dec  9 10:50:02 compute-0 nova_compute[189493]:      </nova:owner>
Dec  9 10:50:02 compute-0 nova_compute[189493]:      <nova:root type="image" uuid="53d12211-5d5c-4333-b3ee-e3dcf1663767"/>
Dec  9 10:50:02 compute-0 nova_compute[189493]:      <nova:ports>
Dec  9 10:50:02 compute-0 nova_compute[189493]:        <nova:port uuid="7819acf8-daa2-4391-96d4-ef33c260f794">
Dec  9 10:50:02 compute-0 nova_compute[189493]:          <nova:ip type="fixed" address="192.168.0.212" ipVersion="4"/>
Dec  9 10:50:02 compute-0 nova_compute[189493]:        </nova:port>
Dec  9 10:50:02 compute-0 nova_compute[189493]:      </nova:ports>
Dec  9 10:50:02 compute-0 nova_compute[189493]:    </nova:instance>
Dec  9 10:50:02 compute-0 nova_compute[189493]:  </metadata>
Dec  9 10:50:02 compute-0 nova_compute[189493]:  <sysinfo type="smbios">
Dec  9 10:50:02 compute-0 nova_compute[189493]:    <system>
Dec  9 10:50:02 compute-0 nova_compute[189493]:      <entry name="manufacturer">RDO</entry>
Dec  9 10:50:02 compute-0 nova_compute[189493]:      <entry name="product">OpenStack Compute</entry>
Dec  9 10:50:02 compute-0 nova_compute[189493]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  9 10:50:02 compute-0 nova_compute[189493]:      <entry name="serial">1bddf2bf-8932-4428-97d7-7342a7ec414b</entry>
Dec  9 10:50:02 compute-0 nova_compute[189493]:      <entry name="uuid">1bddf2bf-8932-4428-97d7-7342a7ec414b</entry>
Dec  9 10:50:02 compute-0 nova_compute[189493]:      <entry name="family">Virtual Machine</entry>
Dec  9 10:50:02 compute-0 nova_compute[189493]:    </system>
Dec  9 10:50:02 compute-0 nova_compute[189493]:  </sysinfo>
Dec  9 10:50:02 compute-0 nova_compute[189493]:  <os>
Dec  9 10:50:02 compute-0 nova_compute[189493]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  9 10:50:02 compute-0 nova_compute[189493]:    <boot dev="hd"/>
Dec  9 10:50:02 compute-0 nova_compute[189493]:    <smbios mode="sysinfo"/>
Dec  9 10:50:02 compute-0 nova_compute[189493]:  </os>
Dec  9 10:50:02 compute-0 nova_compute[189493]:  <features>
Dec  9 10:50:02 compute-0 nova_compute[189493]:    <acpi/>
Dec  9 10:50:02 compute-0 nova_compute[189493]:    <apic/>
Dec  9 10:50:02 compute-0 nova_compute[189493]:    <vmcoreinfo/>
Dec  9 10:50:02 compute-0 nova_compute[189493]:  </features>
Dec  9 10:50:02 compute-0 nova_compute[189493]:  <clock offset="utc">
Dec  9 10:50:02 compute-0 nova_compute[189493]:    <timer name="pit" tickpolicy="delay"/>
Dec  9 10:50:02 compute-0 nova_compute[189493]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  9 10:50:02 compute-0 nova_compute[189493]:    <timer name="hpet" present="no"/>
Dec  9 10:50:02 compute-0 nova_compute[189493]:  </clock>
Dec  9 10:50:02 compute-0 nova_compute[189493]:  <cpu mode="host-model" match="exact">
Dec  9 10:50:02 compute-0 nova_compute[189493]:    <topology sockets="1" cores="1" threads="1"/>
Dec  9 10:50:02 compute-0 nova_compute[189493]:  </cpu>
Dec  9 10:50:02 compute-0 nova_compute[189493]:  <devices>
Dec  9 10:50:02 compute-0 nova_compute[189493]:    <disk type="file" device="disk">
Dec  9 10:50:02 compute-0 nova_compute[189493]:      <driver name="qemu" type="qcow2" cache="none"/>
Dec  9 10:50:02 compute-0 nova_compute[189493]:      <source file="/var/lib/nova/instances/1bddf2bf-8932-4428-97d7-7342a7ec414b/disk"/>
Dec  9 10:50:02 compute-0 nova_compute[189493]:      <target dev="vda" bus="virtio"/>
Dec  9 10:50:02 compute-0 nova_compute[189493]:    </disk>
Dec  9 10:50:02 compute-0 nova_compute[189493]:    <disk type="file" device="disk">
Dec  9 10:50:02 compute-0 nova_compute[189493]:      <driver name="qemu" type="qcow2" cache="none"/>
Dec  9 10:50:02 compute-0 nova_compute[189493]:      <source file="/var/lib/nova/instances/1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.eph0"/>
Dec  9 10:50:02 compute-0 nova_compute[189493]:      <target dev="vdb" bus="virtio"/>
Dec  9 10:50:02 compute-0 nova_compute[189493]:    </disk>
Dec  9 10:50:02 compute-0 nova_compute[189493]:    <disk type="file" device="cdrom">
Dec  9 10:50:02 compute-0 nova_compute[189493]:      <driver name="qemu" type="raw" cache="none"/>
Dec  9 10:50:02 compute-0 nova_compute[189493]:      <source file="/var/lib/nova/instances/1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.config"/>
Dec  9 10:50:02 compute-0 nova_compute[189493]:      <target dev="sda" bus="sata"/>
Dec  9 10:50:02 compute-0 nova_compute[189493]:    </disk>
Dec  9 10:50:02 compute-0 nova_compute[189493]:    <interface type="ethernet">
Dec  9 10:50:02 compute-0 nova_compute[189493]:      <mac address="fa:16:3e:01:4e:b4"/>
Dec  9 10:50:02 compute-0 nova_compute[189493]:      <model type="virtio"/>
Dec  9 10:50:02 compute-0 nova_compute[189493]:      <driver name="vhost" rx_queue_size="512"/>
Dec  9 10:50:02 compute-0 nova_compute[189493]:      <mtu size="1442"/>
Dec  9 10:50:02 compute-0 nova_compute[189493]:      <target dev="tap7819acf8-da"/>
Dec  9 10:50:02 compute-0 nova_compute[189493]:    </interface>
Dec  9 10:50:02 compute-0 nova_compute[189493]:    <serial type="pty">
Dec  9 10:50:02 compute-0 nova_compute[189493]:      <log file="/var/lib/nova/instances/1bddf2bf-8932-4428-97d7-7342a7ec414b/console.log" append="off"/>
Dec  9 10:50:02 compute-0 nova_compute[189493]:    </serial>
Dec  9 10:50:02 compute-0 nova_compute[189493]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  9 10:50:02 compute-0 nova_compute[189493]:    <video>
Dec  9 10:50:02 compute-0 nova_compute[189493]:      <model type="virtio"/>
Dec  9 10:50:02 compute-0 nova_compute[189493]:    </video>
Dec  9 10:50:02 compute-0 nova_compute[189493]:    <input type="tablet" bus="usb"/>
Dec  9 10:50:02 compute-0 nova_compute[189493]:    <rng model="virtio">
Dec  9 10:50:02 compute-0 nova_compute[189493]:      <backend model="random">/dev/urandom</backend>
Dec  9 10:50:02 compute-0 nova_compute[189493]:    </rng>
Dec  9 10:50:02 compute-0 nova_compute[189493]:    <controller type="pci" model="pcie-root"/>
Dec  9 10:50:02 compute-0 nova_compute[189493]:    <controller type="pci" model="pcie-root-port"/>
Dec  9 10:50:02 compute-0 nova_compute[189493]:    <controller type="pci" model="pcie-root-port"/>
Dec  9 10:50:02 compute-0 nova_compute[189493]:    <controller type="pci" model="pcie-root-port"/>
Dec  9 10:50:02 compute-0 nova_compute[189493]:    <controller type="pci" model="pcie-root-port"/>
Dec  9 10:50:02 compute-0 nova_compute[189493]:    <controller type="pci" model="pcie-root-port"/>
Dec  9 10:50:02 compute-0 nova_compute[189493]:    <controller type="pci" model="pcie-root-port"/>
Dec  9 10:50:02 compute-0 nova_compute[189493]:    <controller type="pci" model="pcie-root-port"/>
Dec  9 10:50:02 compute-0 nova_compute[189493]:    <controller type="pci" model="pcie-root-port"/>
Dec  9 10:50:02 compute-0 nova_compute[189493]:    <controller type="pci" model="pcie-root-port"/>
Dec  9 10:50:02 compute-0 nova_compute[189493]:    <controller type="pci" model="pcie-root-port"/>
Dec  9 10:50:02 compute-0 nova_compute[189493]:    <controller type="pci" model="pcie-root-port"/>
Dec  9 10:50:02 compute-0 nova_compute[189493]:    <controller type="pci" model="pcie-root-port"/>
Dec  9 10:50:02 compute-0 nova_compute[189493]:    <controller type="pci" model="pcie-root-port"/>
Dec  9 10:50:02 compute-0 nova_compute[189493]:    <controller type="pci" model="pcie-root-port"/>
Dec  9 10:50:02 compute-0 nova_compute[189493]:    <controller type="pci" model="pcie-root-port"/>
Dec  9 10:50:02 compute-0 nova_compute[189493]:    <controller type="pci" model="pcie-root-port"/>
Dec  9 10:50:02 compute-0 nova_compute[189493]:    <controller type="pci" model="pcie-root-port"/>
Dec  9 10:50:02 compute-0 nova_compute[189493]:    <controller type="pci" model="pcie-root-port"/>
Dec  9 10:50:02 compute-0 nova_compute[189493]:    <controller type="pci" model="pcie-root-port"/>
Dec  9 10:50:02 compute-0 nova_compute[189493]:    <controller type="pci" model="pcie-root-port"/>
Dec  9 10:50:02 compute-0 nova_compute[189493]:    <controller type="pci" model="pcie-root-port"/>
Dec  9 10:50:02 compute-0 nova_compute[189493]:    <controller type="pci" model="pcie-root-port"/>
Dec  9 10:50:02 compute-0 nova_compute[189493]:    <controller type="pci" model="pcie-root-port"/>
Dec  9 10:50:02 compute-0 nova_compute[189493]:    <controller type="pci" model="pcie-root-port"/>
Dec  9 10:50:02 compute-0 nova_compute[189493]:    <controller type="usb" index="0"/>
Dec  9 10:50:02 compute-0 nova_compute[189493]:    <memballoon model="virtio">
Dec  9 10:50:02 compute-0 nova_compute[189493]:      <stats period="10"/>
Dec  9 10:50:02 compute-0 nova_compute[189493]:    </memballoon>
Dec  9 10:50:02 compute-0 nova_compute[189493]:  </devices>
Dec  9 10:50:02 compute-0 nova_compute[189493]: </domain>
Dec  9 10:50:02 compute-0 nova_compute[189493]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
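
[editor's note] _get_guest_xml logs the complete libvirt domain definition before it is handed to libvirtd. When triaging such dumps it can help to pull the identity and sizing fields out programmatically; a small self-contained sketch using only the standard library (the XML literal is a trimmed copy of the dump above):

    import xml.etree.ElementTree as ET

    NOVA_NS = {'nova': 'http://openstack.org/xmlns/libvirt/nova/1.1'}

    def summarize_domain(xml_text):
        # Extract identity and sizing fields from a nova-generated domain XML.
        root = ET.fromstring(xml_text)
        flavor = root.find('.//nova:flavor', NOVA_NS)
        return {
            'name': root.findtext('name'),
            'uuid': root.findtext('uuid'),
            'memory_kib': int(root.findtext('memory')),
            'vcpus': int(root.findtext('vcpu')),
            'flavor': flavor.get('name') if flavor is not None else None,
            'machine': root.find('os/type').get('machine'),
        }

    xml_text = """<domain type="kvm">
     <uuid>1bddf2bf-8932-4428-97d7-7342a7ec414b</uuid>
     <name>instance-00000002</name>
     <memory>524288</memory>
     <vcpu>1</vcpu>
     <metadata>
       <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
         <nova:flavor name="m1.small"/>
       </nova:instance>
     </metadata>
     <os><type arch="x86_64" machine="q35">hvm</type></os>
    </domain>"""
    print(summarize_domain(xml_text))
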
Dec  9 10:50:02 compute-0 nova_compute[189493]: 2025-12-09 10:50:02.552 189497 DEBUG nova.compute.manager [None req-94e35f23-c0f6-4b84-9814-2c6fdae43941 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 1bddf2bf-8932-4428-97d7-7342a7ec414b] Preparing to wait for external event network-vif-plugged-7819acf8-daa2-4391-96d4-ef33c260f794 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec  9 10:50:02 compute-0 nova_compute[189493]: 2025-12-09 10:50:02.553 189497 DEBUG oslo_concurrency.lockutils [None req-94e35f23-c0f6-4b84-9814-2c6fdae43941 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Acquiring lock "1bddf2bf-8932-4428-97d7-7342a7ec414b-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  9 10:50:02 compute-0 nova_compute[189493]: 2025-12-09 10:50:02.553 189497 DEBUG oslo_concurrency.lockutils [None req-94e35f23-c0f6-4b84-9814-2c6fdae43941 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lock "1bddf2bf-8932-4428-97d7-7342a7ec414b-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  9 10:50:02 compute-0 nova_compute[189493]: 2025-12-09 10:50:02.553 189497 DEBUG oslo_concurrency.lockutils [None req-94e35f23-c0f6-4b84-9814-2c6fdae43941 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lock "1bddf2bf-8932-4428-97d7-7342a7ec414b-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
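
[editor's note] The three oslo_concurrency lines record an in-process lock named "<instance-uuid>-events" being taken and released around event registration, so that the waiter being set up here cannot race with the dispatcher that later delivers network-vif-plugged. The same primitive, reduced to a sketch (lock name copied from the log; the default in-process, non-external semantics are assumed):

    from oslo_concurrency import lockutils

    INSTANCE_UUID = '1bddf2bf-8932-4428-97d7-7342a7ec414b'

    # Mutual exclusion matching the Acquiring/acquired/released sequence
    # in the three log lines above.
    with lockutils.lock(f'{INSTANCE_UUID}-events'):
        # critical section: create-or-get the pending instance event
        pass
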
Dec  9 10:50:02 compute-0 nova_compute[189493]: 2025-12-09 10:50:02.554 189497 DEBUG nova.virt.libvirt.vif [None req-94e35f23-c0f6-4b84-9814-2c6fdae43941 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-09T10:49:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='vn-afn7y6w-x2vp5udxgoax-du67okrzyrz6-vnf-c7uowjdwt46l',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-afn7y6w-x2vp5udxgoax-du67okrzyrz6-vnf-c7uowjdwt46l',id=2,image_ref='53d12211-5d5c-4333-b3ee-e3dcf1663767',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='24f6e5b2-dd43-46f1-87a4-e2efc1300914'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='736bbfddbeea47e3ac9d863ba120b8f2',ramdisk_id='',reservation_id='r-ljrndswf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,admin,member',image_base_image_ref='53d12211-5d5c-4333-b3ee-e3dcf1663767',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-09T10:50:00Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT03MTk3NDI2NjE4OTEyMjA2NTYzPT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTcxOTc0MjY2MTg5MTIyMDY1NjM9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09NzE5NzQyNjYxODkxMjIwNjU2Mz09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm
50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTcxOTc0MjY2MTg5MTIyMDY1NjM9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT03MTk3NDI2NjE4OTEyMjA2NTYzPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT03MTk3NDI2NjE4OTEyMjA2NTYzPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpY
nV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvKCclc1xuJywgJyAnLmpvaW4oYXJ
Dec  9 10:50:02 compute-0 nova_compute[189493]: wZW4oYXJncywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09NzE5NzQyNjYxODkxMjIwNjU2Mz09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTcxOTc0MjY2MTg5MTIyMDY1NjM9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT03MTk3NDI2NjE4OTEyMjA2NTYzPT0tLQo=',user_id='e6d3a937c2a74eb0816d9f63820935e0',uuid=1bddf2bf-8932-4428-97d7-7342a7ec414b,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7819acf8-daa2-4391-96d4-ef33c260f794", "address": "fa:16:3e:01:4e:b4", "network": {"id": "c5af7354-5afe-400a-9e13-5500648117d8", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.212", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.172", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "736bbfddbeea47e3ac9d863ba120b8f2", "mtu": 1442, "physical_network": null, "tunneled": true}}, 
"type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7819acf8-da", "ovs_interfaceid": "7819acf8-daa2-4391-96d4-ef33c260f794", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  9 10:50:02 compute-0 nova_compute[189493]: 2025-12-09 10:50:02.555 189497 DEBUG nova.network.os_vif_util [None req-94e35f23-c0f6-4b84-9814-2c6fdae43941 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Converting VIF {"id": "7819acf8-daa2-4391-96d4-ef33c260f794", "address": "fa:16:3e:01:4e:b4", "network": {"id": "c5af7354-5afe-400a-9e13-5500648117d8", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.212", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.172", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "736bbfddbeea47e3ac9d863ba120b8f2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7819acf8-da", "ovs_interfaceid": "7819acf8-daa2-4391-96d4-ef33c260f794", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  9 10:50:02 compute-0 nova_compute[189493]: 2025-12-09 10:50:02.555 189497 DEBUG nova.network.os_vif_util [None req-94e35f23-c0f6-4b84-9814-2c6fdae43941 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:01:4e:b4,bridge_name='br-int',has_traffic_filtering=True,id=7819acf8-daa2-4391-96d4-ef33c260f794,network=Network(c5af7354-5afe-400a-9e13-5500648117d8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap7819acf8-da') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  9 10:50:02 compute-0 nova_compute[189493]: 2025-12-09 10:50:02.556 189497 DEBUG os_vif [None req-94e35f23-c0f6-4b84-9814-2c6fdae43941 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:01:4e:b4,bridge_name='br-int',has_traffic_filtering=True,id=7819acf8-daa2-4391-96d4-ef33c260f794,network=Network(c5af7354-5afe-400a-9e13-5500648117d8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap7819acf8-da') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
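
[editor's note] os_vif.plug() is the public entry point that the "Plugging vif ..." line comes from. A sketch of the same call outside nova, assuming os-vif's documented usage; the VIF object is rebuilt here with a trimmed field set (no Network object), so treat it as illustrative rather than a drop-in:

    import os_vif
    from os_vif import objects

    os_vif.initialize()     # load plugin drivers once per process
    objects.register_all()  # register the versioned object classes

    vif = objects.vif.VIFOpenVSwitch(
        id='7819acf8-daa2-4391-96d4-ef33c260f794',
        address='fa:16:3e:01:4e:b4',
        vif_name='tap7819acf8-da',
        bridge_name='br-int',
        plugin='ovs',
        port_profile=objects.vif.VIFPortProfileOpenVSwitch(
            interface_id='7819acf8-daa2-4391-96d4-ef33c260f794'))

    instance_info = objects.instance_info.InstanceInfo(
        uuid='1bddf2bf-8932-4428-97d7-7342a7ec414b',
        name='instance-00000002')

    os_vif.plug(vif, instance_info)

On success the library emits the "Successfully plugged vif ..." INFO line seen a few entries below.
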
Dec  9 10:50:02 compute-0 nova_compute[189493]: 2025-12-09 10:50:02.557 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 10:50:02 compute-0 nova_compute[189493]: 2025-12-09 10:50:02.557 189497 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  9 10:50:02 compute-0 nova_compute[189493]: 2025-12-09 10:50:02.558 189497 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  9 10:50:02 compute-0 nova_compute[189493]: 2025-12-09 10:50:02.569 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 10:50:02 compute-0 nova_compute[189493]: 2025-12-09 10:50:02.570 189497 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7819acf8-da, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  9 10:50:02 compute-0 nova_compute[189493]: 2025-12-09 10:50:02.571 189497 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap7819acf8-da, col_values=(('external_ids', {'iface-id': '7819acf8-daa2-4391-96d4-ef33c260f794', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:01:4e:b4', 'vm-uuid': '1bddf2bf-8932-4428-97d7-7342a7ec414b'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
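
[editor's note] The two "Running txn" lines show the OVS integration bridge being programmed through ovsdbapp: an idempotent add-port followed by an external_ids update that lets ovn-controller map the tap device back to the Neutron port (the "Claiming lport" lines further down are OVN reacting to exactly these keys). A sketch of issuing the same transaction directly, assuming ovsdbapp's Open_vSwitch schema API and a local ovsdb-server socket path:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    OVSDB = 'unix:/run/openvswitch/db.sock'  # assumed socket path

    idl = connection.OvsdbIdl.from_server(OVSDB, 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl=idl, timeout=10))

    external_ids = {
        'iface-id': '7819acf8-daa2-4391-96d4-ef33c260f794',
        'iface-status': 'active',
        'attached-mac': 'fa:16:3e:01:4e:b4',
        'vm-uuid': '1bddf2bf-8932-4428-97d7-7342a7ec414b',
    }

    # One atomic transaction, mirroring command idx=0 and idx=1 in the log.
    with api.transaction(check_error=True) as txn:
        txn.add(api.add_port('br-int', 'tap7819acf8-da', may_exist=True))
        txn.add(api.db_set('Interface', 'tap7819acf8-da',
                           ('external_ids', external_ids)))
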
Dec  9 10:50:02 compute-0 nova_compute[189493]: 2025-12-09 10:50:02.574 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 10:50:02 compute-0 NetworkManager[56302]: <info>  [1765277402.5761] manager: (tap7819acf8-da): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/27)
Dec  9 10:50:02 compute-0 nova_compute[189493]: 2025-12-09 10:50:02.577 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  9 10:50:02 compute-0 nova_compute[189493]: 2025-12-09 10:50:02.582 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 10:50:02 compute-0 nova_compute[189493]: 2025-12-09 10:50:02.583 189497 INFO os_vif [None req-94e35f23-c0f6-4b84-9814-2c6fdae43941 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:01:4e:b4,bridge_name='br-int',has_traffic_filtering=True,id=7819acf8-daa2-4391-96d4-ef33c260f794,network=Network(c5af7354-5afe-400a-9e13-5500648117d8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap7819acf8-da')#033[00m
Dec  9 10:50:02 compute-0 nova_compute[189493]: 2025-12-09 10:50:02.640 189497 DEBUG nova.virt.libvirt.driver [None req-94e35f23-c0f6-4b84-9814-2c6fdae43941 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  9 10:50:02 compute-0 nova_compute[189493]: 2025-12-09 10:50:02.640 189497 DEBUG nova.virt.libvirt.driver [None req-94e35f23-c0f6-4b84-9814-2c6fdae43941 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  9 10:50:02 compute-0 nova_compute[189493]: 2025-12-09 10:50:02.640 189497 DEBUG nova.virt.libvirt.driver [None req-94e35f23-c0f6-4b84-9814-2c6fdae43941 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  9 10:50:02 compute-0 nova_compute[189493]: 2025-12-09 10:50:02.640 189497 DEBUG nova.virt.libvirt.driver [None req-94e35f23-c0f6-4b84-9814-2c6fdae43941 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] No VIF found with MAC fa:16:3e:01:4e:b4, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  9 10:50:02 compute-0 nova_compute[189493]: 2025-12-09 10:50:02.641 189497 INFO nova.virt.libvirt.driver [None req-94e35f23-c0f6-4b84-9814-2c6fdae43941 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 1bddf2bf-8932-4428-97d7-7342a7ec414b] Using config drive#033[00m
Dec  9 10:50:02 compute-0 rsyslogd[236818]: message too long (8192) with configured size 8096, begin of message is: 2025-12-09 10:50:02.530 189497 DEBUG nova.virt.libvirt.vif [None req-94e35f23-c0 [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Dec  9 10:50:02 compute-0 rsyslogd[236818]: message too long (8192) with configured size 8096, begin of message is: 2025-12-09 10:50:02.554 189497 DEBUG nova.virt.libvirt.vif [None req-94e35f23-c0 [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Dec  9 10:50:02 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:50:02.945 106644 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=9ec27861-bbe8-48fb-b30f-25b967e1609e, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '4'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  9 10:50:02 compute-0 nova_compute[189493]: 2025-12-09 10:50:02.965 189497 INFO nova.virt.libvirt.driver [None req-94e35f23-c0f6-4b84-9814-2c6fdae43941 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 1bddf2bf-8932-4428-97d7-7342a7ec414b] Creating config drive at /var/lib/nova/instances/1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.config#033[00m
Dec  9 10:50:02 compute-0 nova_compute[189493]: 2025-12-09 10:50:02.975 189497 DEBUG oslo_concurrency.processutils [None req-94e35f23-c0f6-4b84-9814-2c6fdae43941 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpj7f79nqi execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  9 10:50:03 compute-0 nova_compute[189493]: 2025-12-09 10:50:03.123 189497 DEBUG oslo_concurrency.processutils [None req-94e35f23-c0f6-4b84-9814-2c6fdae43941 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpj7f79nqi" returned: 0 in 0.147s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
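
[editor's note] Config-drive creation shells out to mkisofs with the config-2 volume label that cloud-init probes for. The same invocation via oslo.concurrency, with the staging directory path kept from the log purely for illustration (nova populates it with the openstack/ metadata tree before this call):

    from oslo_concurrency import processutils

    OUT = '/var/lib/nova/instances/1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.config'
    STAGING = '/tmp/tmpj7f79nqi'  # illustrative; pre-populated metadata tree

    # Mirrors the CMD logged above; -V config-2 is the label cloud-init scans for.
    stdout, stderr = processutils.execute(
        '/usr/bin/mkisofs', '-o', OUT,
        '-ldots', '-allow-lowercase', '-allow-multidot', '-l',
        '-publisher', 'OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9',
        '-quiet', '-J', '-r', '-V', 'config-2', STAGING)
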
Dec  9 10:50:03 compute-0 kernel: tap7819acf8-da: entered promiscuous mode
Dec  9 10:50:03 compute-0 NetworkManager[56302]: <info>  [1765277403.2710] manager: (tap7819acf8-da): new Tun device (/org/freedesktop/NetworkManager/Devices/28)
Dec  9 10:50:03 compute-0 nova_compute[189493]: 2025-12-09 10:50:03.272 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 10:50:03 compute-0 ovn_controller[97780]: 2025-12-09T10:50:03Z|00035|binding|INFO|Claiming lport 7819acf8-daa2-4391-96d4-ef33c260f794 for this chassis.
Dec  9 10:50:03 compute-0 ovn_controller[97780]: 2025-12-09T10:50:03Z|00036|binding|INFO|7819acf8-daa2-4391-96d4-ef33c260f794: Claiming fa:16:3e:01:4e:b4 192.168.0.212
Dec  9 10:50:03 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:50:03.280 106644 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:01:4e:b4 192.168.0.212'], port_security=['fa:16:3e:01:4e:b4 192.168.0.212'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'vnf-scaleup_group-5eiooafn7y6w-x2vp5udxgoax-du67okrzyrz6-port-copozzjp5fc5', 'neutron:cidrs': '192.168.0.212/24', 'neutron:device_id': '1bddf2bf-8932-4428-97d7-7342a7ec414b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c5af7354-5afe-400a-9e13-5500648117d8', 'neutron:port_capabilities': '', 'neutron:port_name': 'vnf-scaleup_group-5eiooafn7y6w-x2vp5udxgoax-du67okrzyrz6-port-copozzjp5fc5', 'neutron:project_id': '736bbfddbeea47e3ac9d863ba120b8f2', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd86dfae4-cfd5-480d-a50e-0084326b1439', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.172'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=61df917c-633f-4b35-857d-39fd859caf35, chassis=[<ovs.db.idl.Row object at 0x7fa01184a610>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fa01184a610>], logical_port=7819acf8-daa2-4391-96d4-ef33c260f794) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
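
[editor's note] The "Matched UPDATE: PortBindingUpdatedEvent(...)" line is ovsdbapp's event framework firing on the Port_Binding row update that the chassis claim produced. A sketch of how such an event class is declared, assuming ovsdbapp's RowEvent base; the constructor arguments mirror the repr in the log (events=('update',), table='Port_Binding', no conditions), and the run() body is a placeholder for the agent's real provisioning logic:

    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortBindingUpdatedEvent(row_event.RowEvent):
        """Fire when a Port_Binding row changes (sketch of the agent's event)."""

        def __init__(self):
            # Same tuple shown in the matched-event repr above.
            super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)
            self.event_name = 'PortBindingUpdatedEvent'

        def run(self, event, row, old):
            # Placeholder: the real handler checks row.chassis against its own
            # chassis and then provisions metadata for row.datapath, as the
            # two INFO lines below record.
            print('Port_Binding updated for lport', row.logical_port)
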
Dec  9 10:50:03 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:50:03.282 106644 INFO neutron.agent.ovn.metadata.agent [-] Port 7819acf8-daa2-4391-96d4-ef33c260f794 in datapath c5af7354-5afe-400a-9e13-5500648117d8 bound to our chassis#033[00m
Dec  9 10:50:03 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:50:03.283 106644 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c5af7354-5afe-400a-9e13-5500648117d8#033[00m
Dec  9 10:50:03 compute-0 nova_compute[189493]: 2025-12-09 10:50:03.291 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 10:50:03 compute-0 ovn_controller[97780]: 2025-12-09T10:50:03Z|00037|binding|INFO|Setting lport 7819acf8-daa2-4391-96d4-ef33c260f794 ovn-installed in OVS
Dec  9 10:50:03 compute-0 ovn_controller[97780]: 2025-12-09T10:50:03Z|00038|binding|INFO|Setting lport 7819acf8-daa2-4391-96d4-ef33c260f794 up in Southbound
Dec  9 10:50:03 compute-0 nova_compute[189493]: 2025-12-09 10:50:03.300 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 10:50:03 compute-0 nova_compute[189493]: 2025-12-09 10:50:03.302 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 10:50:03 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:50:03.309 239934 DEBUG oslo.privsep.daemon [-] privsep: reply[d7af2d4c-4cf6-48c3-9684-1d29fd335f9c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  9 10:50:03 compute-0 systemd-udevd[240552]: Network interface NamePolicy= disabled on kernel command line.
Dec  9 10:50:03 compute-0 NetworkManager[56302]: <info>  [1765277403.3349] device (tap7819acf8-da): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  9 10:50:03 compute-0 NetworkManager[56302]: <info>  [1765277403.3389] device (tap7819acf8-da): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  9 10:50:03 compute-0 systemd-machined[155790]: New machine qemu-2-instance-00000002.
Dec  9 10:50:03 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:50:03.344 239949 DEBUG oslo.privsep.daemon [-] privsep: reply[c4e68d45-bcc7-4369-9f51-d05763e56c82]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  9 10:50:03 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:50:03.348 239949 DEBUG oslo.privsep.daemon [-] privsep: reply[9a54e29d-9d59-48c0-a685-b4d8d040c753]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  9 10:50:03 compute-0 podman[240533]: 2025-12-09 10:50:03.357847378 +0000 UTC m=+0.114163158 container health_status 8ad198c17f1da12dc50d5e17562d0139fb2a2f84db056ee9551dbf4f34c4cb9d (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, io.openshift.expose-services=, io.buildah.version=1.29.0, version=9.4, io.k8s.display-name=Red Hat Universal Base Image 9, release-0.7.12=, managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-container, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, architecture=x86_64, io.openshift.tags=base rhel9, build-date=2024-09-18T21:23:30, config_id=edpm, maintainer=Red Hat, Inc., release=1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler)
Dec  9 10:50:03 compute-0 systemd[1]: Started Virtual Machine qemu-2-instance-00000002.
Dec  9 10:50:03 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:50:03.382 239949 DEBUG oslo.privsep.daemon [-] privsep: reply[2f47f1d0-b27a-4250-9bca-8599296d2ba6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  9 10:50:03 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:50:03.411 239934 DEBUG oslo.privsep.daemon [-] privsep: reply[7a8b04a6-68b5-484d-b10a-853f5769777c]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc5af7354-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:bf:0d:a0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 7, 'tx_packets': 5, 'rx_bytes': 574, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 7, 'tx_packets': 5, 'rx_bytes': 574, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 12], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 396027, 'reachable_time': 28193, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 240567, 'error': None, 'target': 'ovnmeta-c5af7354-5afe-400a-9e13-5500648117d8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  9 10:50:03 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:50:03.430 239934 DEBUG oslo.privsep.daemon [-] privsep: reply[4231f5c7-a8b3-49ae-b7f5-ff839d4d809f]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapc5af7354-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 396043, 'tstamp': 396043}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 240570, 'error': None, 'target': 'ovnmeta-c5af7354-5afe-400a-9e13-5500648117d8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '192.168.0.2'], ['IFA_LOCAL', '192.168.0.2'], ['IFA_BROADCAST', '192.168.0.255'], ['IFA_LABEL', 'tapc5af7354-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 396046, 'tstamp': 396046}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 240570, 'error': None, 'target': 'ovnmeta-c5af7354-5afe-400a-9e13-5500648117d8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  9 10:50:03 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:50:03.433 106644 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc5af7354-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  9 10:50:03 compute-0 nova_compute[189493]: 2025-12-09 10:50:03.435 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 10:50:03 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:50:03.437 106644 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc5af7354-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  9 10:50:03 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:50:03.437 106644 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  9 10:50:03 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:50:03.438 106644 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc5af7354-50, col_values=(('external_ids', {'iface-id': '3eb47070-bc26-4827-a5a8-68152f05129c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  9 10:50:03 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:50:03.438 106644 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  9 10:50:03 compute-0 nova_compute[189493]: 2025-12-09 10:50:03.545 189497 DEBUG nova.compute.manager [req-e5cc09a2-9601-4a82-b31a-6ad0e753f46c req-1e8ad80b-c369-4af6-8d08-64802be53951 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] [instance: 1bddf2bf-8932-4428-97d7-7342a7ec414b] Received event network-vif-plugged-7819acf8-daa2-4391-96d4-ef33c260f794 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  9 10:50:03 compute-0 nova_compute[189493]: 2025-12-09 10:50:03.545 189497 DEBUG oslo_concurrency.lockutils [req-e5cc09a2-9601-4a82-b31a-6ad0e753f46c req-1e8ad80b-c369-4af6-8d08-64802be53951 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] Acquiring lock "1bddf2bf-8932-4428-97d7-7342a7ec414b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  9 10:50:03 compute-0 nova_compute[189493]: 2025-12-09 10:50:03.546 189497 DEBUG oslo_concurrency.lockutils [req-e5cc09a2-9601-4a82-b31a-6ad0e753f46c req-1e8ad80b-c369-4af6-8d08-64802be53951 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] Lock "1bddf2bf-8932-4428-97d7-7342a7ec414b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  9 10:50:03 compute-0 nova_compute[189493]: 2025-12-09 10:50:03.546 189497 DEBUG oslo_concurrency.lockutils [req-e5cc09a2-9601-4a82-b31a-6ad0e753f46c req-1e8ad80b-c369-4af6-8d08-64802be53951 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] Lock "1bddf2bf-8932-4428-97d7-7342a7ec414b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  9 10:50:03 compute-0 nova_compute[189493]: 2025-12-09 10:50:03.546 189497 DEBUG nova.compute.manager [req-e5cc09a2-9601-4a82-b31a-6ad0e753f46c req-1e8ad80b-c369-4af6-8d08-64802be53951 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] [instance: 1bddf2bf-8932-4428-97d7-7342a7ec414b] Processing event network-vif-plugged-7819acf8-daa2-4391-96d4-ef33c260f794 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec  9 10:50:04 compute-0 nova_compute[189493]: 2025-12-09 10:50:04.012 189497 DEBUG nova.compute.manager [None req-94e35f23-c0f6-4b84-9814-2c6fdae43941 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 1bddf2bf-8932-4428-97d7-7342a7ec414b] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec  9 10:50:04 compute-0 nova_compute[189493]: 2025-12-09 10:50:04.013 189497 DEBUG nova.virt.driver [None req-bd919016-4d35-4252-9704-133b2c72d336 - - - - - -] Emitting event <LifecycleEvent: 1765277404.0121238, 1bddf2bf-8932-4428-97d7-7342a7ec414b => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  9 10:50:04 compute-0 nova_compute[189493]: 2025-12-09 10:50:04.014 189497 INFO nova.compute.manager [None req-bd919016-4d35-4252-9704-133b2c72d336 - - - - - -] [instance: 1bddf2bf-8932-4428-97d7-7342a7ec414b] VM Started (Lifecycle Event)#033[00m
Dec  9 10:50:04 compute-0 nova_compute[189493]: 2025-12-09 10:50:04.020 189497 DEBUG nova.virt.libvirt.driver [None req-94e35f23-c0f6-4b84-9814-2c6fdae43941 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 1bddf2bf-8932-4428-97d7-7342a7ec414b] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec  9 10:50:04 compute-0 nova_compute[189493]: 2025-12-09 10:50:04.026 189497 INFO nova.virt.libvirt.driver [-] [instance: 1bddf2bf-8932-4428-97d7-7342a7ec414b] Instance spawned successfully.#033[00m
Dec  9 10:50:04 compute-0 nova_compute[189493]: 2025-12-09 10:50:04.026 189497 DEBUG nova.virt.libvirt.driver [None req-94e35f23-c0f6-4b84-9814-2c6fdae43941 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 1bddf2bf-8932-4428-97d7-7342a7ec414b] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec  9 10:50:04 compute-0 nova_compute[189493]: 2025-12-09 10:50:04.032 189497 DEBUG nova.compute.manager [None req-bd919016-4d35-4252-9704-133b2c72d336 - - - - - -] [instance: 1bddf2bf-8932-4428-97d7-7342a7ec414b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  9 10:50:04 compute-0 nova_compute[189493]: 2025-12-09 10:50:04.036 189497 DEBUG nova.compute.manager [None req-bd919016-4d35-4252-9704-133b2c72d336 - - - - - -] [instance: 1bddf2bf-8932-4428-97d7-7342a7ec414b] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  9 10:50:04 compute-0 nova_compute[189493]: 2025-12-09 10:50:04.046 189497 DEBUG nova.virt.libvirt.driver [None req-94e35f23-c0f6-4b84-9814-2c6fdae43941 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 1bddf2bf-8932-4428-97d7-7342a7ec414b] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  9 10:50:04 compute-0 nova_compute[189493]: 2025-12-09 10:50:04.046 189497 DEBUG nova.virt.libvirt.driver [None req-94e35f23-c0f6-4b84-9814-2c6fdae43941 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 1bddf2bf-8932-4428-97d7-7342a7ec414b] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  9 10:50:04 compute-0 nova_compute[189493]: 2025-12-09 10:50:04.047 189497 DEBUG nova.virt.libvirt.driver [None req-94e35f23-c0f6-4b84-9814-2c6fdae43941 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 1bddf2bf-8932-4428-97d7-7342a7ec414b] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  9 10:50:04 compute-0 nova_compute[189493]: 2025-12-09 10:50:04.047 189497 DEBUG nova.virt.libvirt.driver [None req-94e35f23-c0f6-4b84-9814-2c6fdae43941 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 1bddf2bf-8932-4428-97d7-7342a7ec414b] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  9 10:50:04 compute-0 nova_compute[189493]: 2025-12-09 10:50:04.047 189497 DEBUG nova.virt.libvirt.driver [None req-94e35f23-c0f6-4b84-9814-2c6fdae43941 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 1bddf2bf-8932-4428-97d7-7342a7ec414b] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  9 10:50:04 compute-0 nova_compute[189493]: 2025-12-09 10:50:04.048 189497 DEBUG nova.virt.libvirt.driver [None req-94e35f23-c0f6-4b84-9814-2c6fdae43941 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 1bddf2bf-8932-4428-97d7-7342a7ec414b] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  9 10:50:04 compute-0 nova_compute[189493]: 2025-12-09 10:50:04.059 189497 INFO nova.compute.manager [None req-bd919016-4d35-4252-9704-133b2c72d336 - - - - - -] [instance: 1bddf2bf-8932-4428-97d7-7342a7ec414b] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  9 10:50:04 compute-0 nova_compute[189493]: 2025-12-09 10:50:04.059 189497 DEBUG nova.virt.driver [None req-bd919016-4d35-4252-9704-133b2c72d336 - - - - - -] Emitting event <LifecycleEvent: 1765277404.0122805, 1bddf2bf-8932-4428-97d7-7342a7ec414b => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  9 10:50:04 compute-0 nova_compute[189493]: 2025-12-09 10:50:04.060 189497 INFO nova.compute.manager [None req-bd919016-4d35-4252-9704-133b2c72d336 - - - - - -] [instance: 1bddf2bf-8932-4428-97d7-7342a7ec414b] VM Paused (Lifecycle Event)#033[00m
Dec  9 10:50:04 compute-0 nova_compute[189493]: 2025-12-09 10:50:04.083 189497 DEBUG nova.compute.manager [None req-bd919016-4d35-4252-9704-133b2c72d336 - - - - - -] [instance: 1bddf2bf-8932-4428-97d7-7342a7ec414b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  9 10:50:04 compute-0 nova_compute[189493]: 2025-12-09 10:50:04.089 189497 DEBUG nova.virt.driver [None req-bd919016-4d35-4252-9704-133b2c72d336 - - - - - -] Emitting event <LifecycleEvent: 1765277404.0183938, 1bddf2bf-8932-4428-97d7-7342a7ec414b => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  9 10:50:04 compute-0 nova_compute[189493]: 2025-12-09 10:50:04.089 189497 INFO nova.compute.manager [None req-bd919016-4d35-4252-9704-133b2c72d336 - - - - - -] [instance: 1bddf2bf-8932-4428-97d7-7342a7ec414b] VM Resumed (Lifecycle Event)#033[00m
Dec  9 10:50:04 compute-0 nova_compute[189493]: 2025-12-09 10:50:04.107 189497 INFO nova.compute.manager [None req-94e35f23-c0f6-4b84-9814-2c6fdae43941 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 1bddf2bf-8932-4428-97d7-7342a7ec414b] Took 3.64 seconds to spawn the instance on the hypervisor.#033[00m
Dec  9 10:50:04 compute-0 nova_compute[189493]: 2025-12-09 10:50:04.108 189497 DEBUG nova.compute.manager [None req-94e35f23-c0f6-4b84-9814-2c6fdae43941 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 1bddf2bf-8932-4428-97d7-7342a7ec414b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  9 10:50:04 compute-0 nova_compute[189493]: 2025-12-09 10:50:04.109 189497 DEBUG nova.compute.manager [None req-bd919016-4d35-4252-9704-133b2c72d336 - - - - - -] [instance: 1bddf2bf-8932-4428-97d7-7342a7ec414b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  9 10:50:04 compute-0 nova_compute[189493]: 2025-12-09 10:50:04.118 189497 DEBUG nova.compute.manager [None req-bd919016-4d35-4252-9704-133b2c72d336 - - - - - -] [instance: 1bddf2bf-8932-4428-97d7-7342a7ec414b] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  9 10:50:04 compute-0 nova_compute[189493]: 2025-12-09 10:50:04.153 189497 INFO nova.compute.manager [None req-bd919016-4d35-4252-9704-133b2c72d336 - - - - - -] [instance: 1bddf2bf-8932-4428-97d7-7342a7ec414b] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  9 10:50:04 compute-0 nova_compute[189493]: 2025-12-09 10:50:04.174 189497 INFO nova.compute.manager [None req-94e35f23-c0f6-4b84-9814-2c6fdae43941 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 1bddf2bf-8932-4428-97d7-7342a7ec414b] Took 4.13 seconds to build instance.#033[00m
Dec  9 10:50:04 compute-0 nova_compute[189493]: 2025-12-09 10:50:04.191 189497 DEBUG oslo_concurrency.lockutils [None req-94e35f23-c0f6-4b84-9814-2c6fdae43941 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lock "1bddf2bf-8932-4428-97d7-7342a7ec414b" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 4.213s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  9 10:50:04 compute-0 nova_compute[189493]: 2025-12-09 10:50:04.260 189497 DEBUG nova.network.neutron [req-252495f9-0493-4e2c-85e7-1505919e3e68 req-9f92104c-132e-457c-aaef-52b3f0016a9d 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] [instance: 1bddf2bf-8932-4428-97d7-7342a7ec414b] Updated VIF entry in instance network info cache for port 7819acf8-daa2-4391-96d4-ef33c260f794. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  9 10:50:04 compute-0 nova_compute[189493]: 2025-12-09 10:50:04.260 189497 DEBUG nova.network.neutron [req-252495f9-0493-4e2c-85e7-1505919e3e68 req-9f92104c-132e-457c-aaef-52b3f0016a9d 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] [instance: 1bddf2bf-8932-4428-97d7-7342a7ec414b] Updating instance_info_cache with network_info: [{"id": "7819acf8-daa2-4391-96d4-ef33c260f794", "address": "fa:16:3e:01:4e:b4", "network": {"id": "c5af7354-5afe-400a-9e13-5500648117d8", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.212", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.172", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "736bbfddbeea47e3ac9d863ba120b8f2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7819acf8-da", "ovs_interfaceid": "7819acf8-daa2-4391-96d4-ef33c260f794", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  9 10:50:04 compute-0 nova_compute[189493]: 2025-12-09 10:50:04.476 189497 DEBUG oslo_concurrency.lockutils [req-252495f9-0493-4e2c-85e7-1505919e3e68 req-9f92104c-132e-457c-aaef-52b3f0016a9d 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] Releasing lock "refresh_cache-1bddf2bf-8932-4428-97d7-7342a7ec414b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  9 10:50:05 compute-0 nova_compute[189493]: 2025-12-09 10:50:05.624 189497 DEBUG nova.compute.manager [req-88d4ec09-bd40-4dc0-8eb1-33b71616363a req-927218af-638d-4a15-ae91-4182db365b57 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] [instance: 1bddf2bf-8932-4428-97d7-7342a7ec414b] Received event network-vif-plugged-7819acf8-daa2-4391-96d4-ef33c260f794 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  9 10:50:05 compute-0 nova_compute[189493]: 2025-12-09 10:50:05.625 189497 DEBUG oslo_concurrency.lockutils [req-88d4ec09-bd40-4dc0-8eb1-33b71616363a req-927218af-638d-4a15-ae91-4182db365b57 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] Acquiring lock "1bddf2bf-8932-4428-97d7-7342a7ec414b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  9 10:50:05 compute-0 nova_compute[189493]: 2025-12-09 10:50:05.625 189497 DEBUG oslo_concurrency.lockutils [req-88d4ec09-bd40-4dc0-8eb1-33b71616363a req-927218af-638d-4a15-ae91-4182db365b57 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] Lock "1bddf2bf-8932-4428-97d7-7342a7ec414b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  9 10:50:05 compute-0 nova_compute[189493]: 2025-12-09 10:50:05.626 189497 DEBUG oslo_concurrency.lockutils [req-88d4ec09-bd40-4dc0-8eb1-33b71616363a req-927218af-638d-4a15-ae91-4182db365b57 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] Lock "1bddf2bf-8932-4428-97d7-7342a7ec414b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  9 10:50:05 compute-0 nova_compute[189493]: 2025-12-09 10:50:05.626 189497 DEBUG nova.compute.manager [req-88d4ec09-bd40-4dc0-8eb1-33b71616363a req-927218af-638d-4a15-ae91-4182db365b57 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] [instance: 1bddf2bf-8932-4428-97d7-7342a7ec414b] No waiting events found dispatching network-vif-plugged-7819acf8-daa2-4391-96d4-ef33c260f794 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  9 10:50:05 compute-0 nova_compute[189493]: 2025-12-09 10:50:05.626 189497 WARNING nova.compute.manager [req-88d4ec09-bd40-4dc0-8eb1-33b71616363a req-927218af-638d-4a15-ae91-4182db365b57 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] [instance: 1bddf2bf-8932-4428-97d7-7342a7ec414b] Received unexpected event network-vif-plugged-7819acf8-daa2-4391-96d4-ef33c260f794 for instance with vm_state active and task_state None.#033[00m
Dec  9 10:50:06 compute-0 podman[240584]: 2025-12-09 10:50:06.564098475 +0000 UTC m=+0.120758505 container health_status 8f562587c42532f877bd4ac5090cf2d81dd9415b6201e22f74972e6d6b9e9403 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec  9 10:50:06 compute-0 podman[240585]: 2025-12-09 10:50:06.571429454 +0000 UTC m=+0.121091824 container health_status b432835229990b9e7cd237d75f8273b15e565fca524d4ea9a7c1f1bf3c773614 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.build-date=20251125, config_id=edpm, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Dec  9 10:50:07 compute-0 nova_compute[189493]: 2025-12-09 10:50:07.037 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 10:50:07 compute-0 nova_compute[189493]: 2025-12-09 10:50:07.576 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 10:50:11 compute-0 podman[240621]: 2025-12-09 10:50:11.008969312 +0000 UTC m=+0.136925474 container health_status 5da5cd4e36e0bba48fb617392bc8983ed1dbced7e4599ef74bb3327a2d50468d (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, build-date=2025-08-20T13:12:41, io.openshift.expose-services=, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, vcs-type=git, io.openshift.tags=minimal rhel9, container_name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, maintainer=Red Hat, Inc., version=9.6, vendor=Red Hat, Inc., architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7)
Dec  9 10:50:12 compute-0 nova_compute[189493]: 2025-12-09 10:50:12.041 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 10:50:12 compute-0 nova_compute[189493]: 2025-12-09 10:50:12.583 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 10:50:13 compute-0 podman[240643]: 2025-12-09 10:50:13.052401427 +0000 UTC m=+0.207176852 container health_status e0a077177b2f078df1f170a6e5c0e8e08d4365b999ec0c487047ed6ab628f3d6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251202, config_id=ovn_controller, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec  9 10:50:15 compute-0 podman[240669]: 2025-12-09 10:50:15.988884494 +0000 UTC m=+0.127874097 container health_status d3a438131bb4ae6fd62d2e1493edbbbd51d1b8d6cbe1e9243f414a3aa421452b (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec  9 10:50:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:50:16.977 106644 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  9 10:50:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:50:16.978 106644 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  9 10:50:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:50:16.979 106644 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  9 10:50:17 compute-0 nova_compute[189493]: 2025-12-09 10:50:17.042 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 10:50:17 compute-0 nova_compute[189493]: 2025-12-09 10:50:17.588 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 10:50:22 compute-0 nova_compute[189493]: 2025-12-09 10:50:22.044 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 10:50:22 compute-0 nova_compute[189493]: 2025-12-09 10:50:22.590 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 10:50:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:23.288 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to take longer than expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  9 10:50:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:23.289 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec  9 10:50:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:23.289 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b800>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 10:50:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:23.291 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f8a75e1b7d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 10:50:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:23.292 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e19820>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 10:50:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:23.292 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75eb8080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 10:50:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:23.292 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75eb8110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 10:50:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:23.292 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b1a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 10:50:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:23.292 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75eb81a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 10:50:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:23.292 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b2c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 10:50:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:23.293 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 10:50:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:23.293 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 10:50:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:23.293 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a78fa8380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 10:50:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:23.293 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a7702ebd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 10:50:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:23.293 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b3e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 10:50:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:23.293 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 10:50:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:23.293 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75eb8440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 10:50:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:23.293 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a78c21460>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 10:50:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:23.293 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b4a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 10:50:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:23.293 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1bce0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 10:50:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:23.293 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 10:50:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:23.294 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1bd10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 10:50:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:23.294 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 10:50:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:23.294 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1bd70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 10:50:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:23.294 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1bdd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 10:50:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:23.294 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1be30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 10:50:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:23.294 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1bf20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 10:50:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:23.294 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b7a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 10:50:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:23.294 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1bfb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 10:50:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:23.302 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance 1bddf2bf-8932-4428-97d7-7342a7ec414b from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Dec  9 10:50:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:23.751 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/1bddf2bf-8932-4428-97d7-7342a7ec414b -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}c39d506960fbc5044d0bc54d9594567a78a3d14170701e46780a30eef7979125" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Dec  9 10:50:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:24.366 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1960 Content-Type: application/json Date: Tue, 09 Dec 2025 10:50:23 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-fbb073f9-cf4a-40e5-9984-d6fe4fa8bd9a x-openstack-request-id: req-fbb073f9-cf4a-40e5-9984-d6fe4fa8bd9a _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Dec  9 10:50:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:24.366 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "1bddf2bf-8932-4428-97d7-7342a7ec414b", "name": "vn-afn7y6w-x2vp5udxgoax-du67okrzyrz6-vnf-c7uowjdwt46l", "status": "ACTIVE", "tenant_id": "736bbfddbeea47e3ac9d863ba120b8f2", "user_id": "e6d3a937c2a74eb0816d9f63820935e0", "metadata": {"metering.server_group": "24f6e5b2-dd43-46f1-87a4-e2efc1300914"}, "hostId": "17e7a15a42f56673ff2b1bfd38625d4824c4455b94d5713ec4c3a7ee", "image": {"id": "53d12211-5d5c-4333-b3ee-e3dcf1663767", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/53d12211-5d5c-4333-b3ee-e3dcf1663767"}]}, "flavor": {"id": "cf91b364-8467-4d1e-8c92-f7d1fab99905", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/cf91b364-8467-4d1e-8c92-f7d1fab99905"}]}, "created": "2025-12-09T10:49:58Z", "updated": "2025-12-09T10:50:04Z", "addresses": {"private": [{"version": 4, "addr": "192.168.0.212", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:01:4e:b4"}, {"version": 4, "addr": "192.168.122.172", "OS-EXT-IPS:type": "floating", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:01:4e:b4"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/1bddf2bf-8932-4428-97d7-7342a7ec414b"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/1bddf2bf-8932-4428-97d7-7342a7ec414b"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": null, "OS-SRV-USG:launched_at": "2025-12-09T10:50:04.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "basic"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-00000002", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Dec  9 10:50:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:24.366 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/1bddf2bf-8932-4428-97d7-7342a7ec414b used request id req-fbb073f9-cf4a-40e5-9984-d6fe4fa8bd9a request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Dec  9 10:50:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:24.370 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '1bddf2bf-8932-4428-97d7-7342a7ec414b', 'name': 'vn-afn7y6w-x2vp5udxgoax-du67okrzyrz6-vnf-c7uowjdwt46l', 'flavor': {'id': 'cf91b364-8467-4d1e-8c92-f7d1fab99905', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '53d12211-5d5c-4333-b3ee-e3dcf1663767'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '736bbfddbeea47e3ac9d863ba120b8f2', 'user_id': 'e6d3a937c2a74eb0816d9f63820935e0', 'hostId': '17e7a15a42f56673ff2b1bfd38625d4824c4455b94d5713ec4c3a7ee', 'status': 'active', 'metadata': {'metering.server_group': '24f6e5b2-dd43-46f1-87a4-e2efc1300914'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
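The request/response pair above is the discovery agent fetching one server from the Nova API and condensing the JSON into the flat "instance data" dict it logs. A sketch of the same round trip with python-novaclient; the auth URL and credentials below are placeholders, not values taken from this log:

    from keystoneauth1.identity import v3
    from keystoneauth1 import session
    from novaclient import client

    auth = v3.Password(auth_url='https://keystone.example.com/v3',
                       username='ceilometer', password='secret',
                       project_name='service',
                       user_domain_name='Default',
                       project_domain_name='Default')
    nova = client.Client('2.1', session=session.Session(auth=auth))

    server = nova.servers.get('1bddf2bf-8932-4428-97d7-7342a7ec414b')
    instance_data = {
        'id': server.id,
        'name': server.name,
        'tenant_id': server.tenant_id,
        'user_id': server.user_id,
        'status': server.status.lower(),   # "ACTIVE" -> "active", as logged
        'metadata': server.metadata,
    }
    print(instance_data)

keystoneauth1 handles the token and request-id plumbing that produces the REQ/RESP DEBUG lines seen above.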
Dec  9 10:50:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:24.374 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Dec  9 10:50:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:24.376 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}c39d506960fbc5044d0bc54d9594567a78a3d14170701e46780a30eef7979125" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Dec  9 10:50:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:24.743 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1850 Content-Type: application/json Date: Tue, 09 Dec 2025 10:50:24 GMT Keep-Alive: timeout=5, max=99 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-ddb7cfc0-30dd-4590-a7cd-6549c406cf02 x-openstack-request-id: req-ddb7cfc0-30dd-4590-a7cd-6549c406cf02 _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Dec  9 10:50:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:24.743 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f", "name": "test_0", "status": "ACTIVE", "tenant_id": "736bbfddbeea47e3ac9d863ba120b8f2", "user_id": "e6d3a937c2a74eb0816d9f63820935e0", "metadata": {}, "hostId": "17e7a15a42f56673ff2b1bfd38625d4824c4455b94d5713ec4c3a7ee", "image": {"id": "53d12211-5d5c-4333-b3ee-e3dcf1663767", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/53d12211-5d5c-4333-b3ee-e3dcf1663767"}]}, "flavor": {"id": "cf91b364-8467-4d1e-8c92-f7d1fab99905", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/cf91b364-8467-4d1e-8c92-f7d1fab99905"}]}, "created": "2025-12-09T10:48:38Z", "updated": "2025-12-09T10:48:53Z", "addresses": {"private": [{"version": 4, "addr": "192.168.0.250", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:c7:65:39"}, {"version": 4, "addr": "192.168.122.226", "OS-EXT-IPS:type": "floating", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:c7:65:39"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": null, "OS-SRV-USG:launched_at": "2025-12-09T10:48:53.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "basic"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-00000001", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Dec  9 10:50:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:24.743 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f used request id req-ddb7cfc0-30dd-4590-a7cd-6549c406cf02 request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Dec  9 10:50:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:24.747 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f', 'name': 'test_0', 'flavor': {'id': 'cf91b364-8467-4d1e-8c92-f7d1fab99905', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '53d12211-5d5c-4333-b3ee-e3dcf1663767'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '736bbfddbeea47e3ac9d863ba120b8f2', 'user_id': 'e6d3a937c2a74eb0816d9f63820935e0', 'hostId': '17e7a15a42f56673ff2b1bfd38625d4824c4455b94d5713ec4c3a7ee', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  9 10:50:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:24.747 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Dec  9 10:50:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:24.747 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1b800>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 10:50:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:24.748 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1b800>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
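The two lines above are the coordination gate: a pollster only consults a hash ring when its source names a coordination group, and here the group is None, so this agent polls everything locally. A compact sketch of that decision, using a trivial hash bucket as a stand-in for the real tooz-based ring:

    import hashlib

    def needs_coordination(group_name):
        # Mirrors the log: a group name of None means no coordination required.
        return group_name is not None

    def owns_resource(agent_id, agents, resource_id):
        # Toy consistent bucketing; real deployments use a tooz hash ring.
        bucket = int(hashlib.md5(resource_id.encode()).hexdigest(), 16) % len(agents)
        return sorted(agents)[bucket] == agent_id

    if not needs_coordination(None):
        print('polling all local resources; no hashring consulted')
    elif owns_resource('agent-a', ['agent-a', 'agent-b'], 'instance-00000002'):
        print('this agent owns the resource')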
Dec  9 10:50:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:24.749 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 10:50:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:24.751 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-12-09T10:50:24.748586) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  9 10:50:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:24.756 14 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for 1bddf2bf-8932-4428-97d7-7342a7ec414b / tap7819acf8-da inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Dec  9 10:50:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:24.757 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/network.incoming.bytes volume: 90 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:50:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:24.764 14 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f / tap2c684388-b6 inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Dec  9 10:50:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:24.764 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/network.incoming.bytes volume: 2010 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:50:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:24.766 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
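The "No delta meter predecessor" lines a few entries up mark the first reading of a cumulative vNIC counter: with nothing to subtract, the initial observation passes through as-is, and later polls would emit differences. A minimal sketch of that bookkeeping, with hypothetical names:

    previous = {}  # (instance_id, iface) -> last cumulative reading

    def delta_sample(instance_id, iface, cumulative):
        key = (instance_id, iface)
        prior = previous.get(key)
        previous[key] = cumulative
        if prior is None:
            # First poll for this vNIC, matching the DEBUG lines above.
            print('No delta meter predecessor for', instance_id, '/', iface)
            return cumulative
        return cumulative - prior

    print(delta_sample('1bddf2bf', 'tap7819acf8-da', 90))    # first poll -> 90
    print(delta_sample('1bddf2bf', 'tap7819acf8-da', 150))   # next poll -> 60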
Dec  9 10:50:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:24.767 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f8a7854a570>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 10:50:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:24.768 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Dec  9 10:50:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:24.768 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e19820>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 10:50:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:24.768 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e19820>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  9 10:50:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:24.769 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 10:50:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:24.770 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-12-09T10:50:24.769458) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  9 10:50:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:24.795 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:50:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:24.796 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:50:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:24.796 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:50:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:24.824 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:50:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:24.824 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:50:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:24.825 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:50:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:24.825 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
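Each instance above yields three disk.device.capacity samples, one per attached block device: the m1.small flavor seen earlier carries a 1 GiB root disk, a 1 GiB ephemeral disk, and a small config drive, which lines up with the two 1073741824-byte values plus one smaller one. libvirt exposes exactly these per-device numbers; a sketch, assuming a local qemu socket and illustrative device targets:

    import libvirt

    conn = libvirt.open('qemu:///system')
    dom = conn.lookupByName('instance-00000002')
    for dev in ('vda', 'vdb'):
        # blockInfo returns [capacity, allocation, physical] in bytes,
        # the raw material for disk.device.capacity/.allocation/.usage.
        capacity, allocation, physical = dom.blockInfo(dev)
        print(dev, 'capacity:', capacity,
              'allocation:', allocation, 'physical:', physical)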
Dec  9 10:50:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:24.825 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f8a75eb8050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 10:50:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:24.825 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Dec  9 10:50:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:24.826 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75eb8080>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 10:50:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:24.826 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75eb8080>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  9 10:50:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:24.826 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 10:50:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:24.826 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/network.outgoing.packets volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:50:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:24.826 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-12-09T10:50:24.826262) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  9 10:50:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:24.827 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/network.outgoing.packets volume: 20 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:50:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:24.827 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Dec  9 10:50:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:24.827 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f8a75eb80e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 10:50:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:24.827 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  9 10:50:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:24.827 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75eb8110>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 10:50:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:24.827 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75eb8110>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  9 10:50:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:24.827 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 10:50:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:24.828 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:50:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:24.828 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:50:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:24.828 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  9 10:50:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:24.828 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f8a75e1b260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 10:50:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:24.828 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Dec  9 10:50:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:24.829 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1b1a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 10:50:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:24.829 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1b1a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  9 10:50:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:24.829 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 10:50:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:24.829 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-12-09T10:50:24.827946) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  9 10:50:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:24.830 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-12-09T10:50:24.829198) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  9 10:50:24 compute-0 nova_compute[189493]: 2025-12-09 10:50:24.875 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  9 10:50:24 compute-0 nova_compute[189493]: 2025-12-09 10:50:24.881 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  9 10:50:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:24.938 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.read.bytes volume: 18348032 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:50:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:24.939 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.read.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:50:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:24.939 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.read.bytes volume: 2048 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:50:24 compute-0 podman[240696]: 2025-12-09 10:50:24.962684392 +0000 UTC m=+0.121557307 container health_status 0391d8911d61abd7376f1f93f329cadfe8d3add845c9e6f46fc2c3dfbcc4f02a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible)
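The podman event above records a healthy periodic healthcheck for the multipathd container; the embedded config shows the test command /openstack/healthcheck. The same check can be invoked by hand, sketched here via subprocess on the assumption that podman is installed and the container name on this host matches the log:

    import subprocess

    result = subprocess.run(['podman', 'healthcheck', 'run', 'multipathd'],
                            capture_output=True, text=True)
    # Exit status 0 corresponds to health_status=healthy in the event log.
    print('healthy' if result.returncode == 0
          else result.stdout + result.stderr)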
Dec  9 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.033 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.034 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.034 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.035 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Dec  9 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.035 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f8a75eb8170>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.035 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Dec  9 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.035 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75eb81a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.036 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75eb81a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  9 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.036 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.036 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.036 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.037 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-12-09T10:50:25.036142) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  9 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.037 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Dec  9 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.037 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f8a75e1b290>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.037 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Dec  9 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.037 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1b2c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.038 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1b2c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  9 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.038 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.038 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.read.latency volume: 331172565 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.038 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-12-09T10:50:25.038138) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  9 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.038 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.read.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.039 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.read.latency volume: 1023978 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.039 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.read.latency volume: 469600468 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.039 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.read.latency volume: 78501609 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.040 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.read.latency volume: 60811824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.040 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Dec  9 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.041 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f8a75e1b2f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.041 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Dec  9 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.041 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1b320>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.041 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1b320>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  9 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.041 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.041 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.read.requests volume: 573 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.041 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.read.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.042 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-12-09T10:50:25.041408) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  9 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.042 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.read.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.042 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.042 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.043 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.043 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Dec  9 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.044 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f8a75e1b350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.044 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Dec  9 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.044 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1b380>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.044 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1b380>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  9 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.044 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.044 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.usage volume: 196624 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.045 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.usage volume: 196624 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.045 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.046 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.046 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.046 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.047 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-12-09T10:50:25.044620) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  9 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.047 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Dec  9 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.047 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f8a7710f530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.047 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Dec  9 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.048 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a78fa8380>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.048 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a78fa8380>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  9 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.048 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.048 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-12-09T10:50:25.048234) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  9 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.076 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/cpu volume: 20610000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.102 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/cpu volume: 34520000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.102 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
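The cpu samples above (20610000000 and 34520000000) are cumulative guest CPU time in nanoseconds, which libvirt reports directly in the domain info tuple. A minimal sketch, assuming a local qemu connection:

    import libvirt

    conn = libvirt.open('qemu:///system')
    dom = conn.lookupByName('instance-00000001')
    # info() -> [state, maxMemKiB, memKiB, nrVirtCpu, cpuTimeNs]
    state, max_mem, mem, vcpus, cpu_time_ns = dom.info()
    print('cpu volume (ns):', cpu_time_ns, 'across', vcpus, 'vCPU(s)')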
Dec  9 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.103 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f8a78ed1430>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.103 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Dec  9 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.103 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a7702ebd0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.103 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a7702ebd0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  9 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.103 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.104 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.allocation volume: 204800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.104 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.allocation volume: 204800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.104 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.105 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.allocation volume: 21307392 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.105 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.106 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.107 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Dec  9 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.107 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f8a75e1b3b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.107 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Dec  9 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.107 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1b3e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.108 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1b3e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  9 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.108 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.108 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-12-09T10:50:25.103744) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  9 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.108 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.108 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.109 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.109 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.110 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.110 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.111 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Dec  9 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.111 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f8a75e1b410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.111 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Dec  9 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.111 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1b440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.112 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1b440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  9 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.112 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.112 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.112 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.113 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.113 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.write.latency volume: 1299788707 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.114 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.write.latency volume: 9241063 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.114 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.115 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Dec  9 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.115 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f8a75eb8410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.115 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Dec  9 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.115 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75eb8440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.115 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75eb8440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  9 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.115 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.116 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.116 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-12-09T10:50:25.108145) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  9 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.116 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.116 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-12-09T10:50:25.112283) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  9 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.116 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-12-09T10:50:25.115933) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  9 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.116 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Dec  9 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.117 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f8a75e1be90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.117 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Dec  9 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.117 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a78c21460>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.117 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a78c21460>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  9 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.117 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.117 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/network.outgoing.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.117 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/network.outgoing.bytes volume: 2104 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.118 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Dec  9 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.118 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f8a75e1b470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.118 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-12-09T10:50:25.117400) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  9 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.118 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Dec  9 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.118 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1b4a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.118 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1b4a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  9 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.119 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.119 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.119 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.119 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.119 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-12-09T10:50:25.119052) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  9 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.120 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.write.requests volume: 234 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.120 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.120 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.120 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Dec  9 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.120 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f8a75e1b830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.121 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Dec  9 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.121 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1bce0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.121 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1bce0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  9 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.121 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.121 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.121 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.121 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-12-09T10:50:25.121232) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  9 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.122 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Dec  9 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.122 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f8a75e1b4d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.122 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Dec  9 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.122 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1b500>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.122 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1b500>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  9 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.122 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.122 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-12-09T10:50:25.122360) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  9 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.122 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Dec  9 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.123 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f8a75e1bad0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.123 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters
Dec  9 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.123 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1bd10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.123 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1bd10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  9 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.123 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.123 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Dec  9 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.124 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.rate (2025-12-09T10:50:25.123279) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  9 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.123 14 ERROR ceilometer.polling.manager [-] Preventing pollster network.incoming.bytes.rate from polling [<NovaLikeServer: vn-afn7y6w-x2vp5udxgoax-du67okrzyrz6-vnf-c7uowjdwt46l>, <NovaLikeServer: test_0>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: vn-afn7y6w-x2vp5udxgoax-du67okrzyrz6-vnf-c7uowjdwt46l>, <NovaLikeServer: test_0>]
Dec  9 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.124 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f8a75e1b530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.124 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Dec  9 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.124 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1b560>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.124 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1b560>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  9 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.125 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.125 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Dec  9 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.125 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f8a75e1bd40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.125 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Dec  9 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.125 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1bd70>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.125 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1bd70>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  9 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.125 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-12-09T10:50:25.124977) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  9 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.125 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.126 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/network.incoming.packets volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.126 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/network.incoming.packets volume: 18 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.126 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Dec  9 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.126 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f8a75e1bda0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.126 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Dec  9 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.126 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1bdd0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.127 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1bdd0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  9 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.127 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.127 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.127 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-12-09T10:50:25.125953) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  9 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.127 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-12-09T10:50:25.127215) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  9 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.127 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.128 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Dec  9 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.128 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f8a75e1be00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.128 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Dec  9 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.128 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1be30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.128 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1be30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  9 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.128 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.128 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.128 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.129 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Dec  9 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.129 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f8a75e1bef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.129 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  9 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.129 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-12-09T10:50:25.128483) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  9 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.129 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1bf20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.129 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1bf20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  9 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.130 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.130 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.130 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-12-09T10:50:25.130083) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  9 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.130 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.130 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  9 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.131 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f8a75e1b770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.131 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Dec  9 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.131 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1b7a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.131 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1b7a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  9 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.131 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.131 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/memory.usage volume: Unavailable _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.131 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-12-09T10:50:25.131345) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  9 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.131 14 WARNING ceilometer.compute.pollsters [-] memory.usage statistic is not available for instance 1bddf2bf-8932-4428-97d7-7342a7ec414b: ceilometer.compute.pollsters.NoVolumeException
Dec  9 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.132 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/memory.usage volume: 48.94921875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.132 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Dec  9 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.132 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f8a75e1bf80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.132 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters
Dec  9 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.132 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1bfb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.132 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1bfb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  9 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.133 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.133 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Dec  9 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.133 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.rate (2025-12-09T10:50:25.133011) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  9 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.133 14 ERROR ceilometer.polling.manager [-] Preventing pollster network.outgoing.bytes.rate from polling [<NovaLikeServer: vn-afn7y6w-x2vp5udxgoax-du67okrzyrz6-vnf-c7uowjdwt46l>, <NovaLikeServer: test_0>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: vn-afn7y6w-x2vp5udxgoax-du67okrzyrz6-vnf-c7uowjdwt46l>, <NovaLikeServer: test_0>]
Dec  9 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.134 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.134 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.134 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.134 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.134 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.134 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.134 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.134 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.134 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.134 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.135 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.135 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.135 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.135 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.135 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.135 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.135 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.135 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.135 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.135 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.135 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.135 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.136 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.136 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.136 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 10:50:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:50:25.136 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 10:50:25 compute-0 nova_compute[189493]: 2025-12-09 10:50:25.841 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  9 10:50:25 compute-0 nova_compute[189493]: 2025-12-09 10:50:25.843 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  9 10:50:26 compute-0 nova_compute[189493]: 2025-12-09 10:50:26.841 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  9 10:50:27 compute-0 nova_compute[189493]: 2025-12-09 10:50:27.047 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 10:50:27 compute-0 nova_compute[189493]: 2025-12-09 10:50:27.593 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 10:50:27 compute-0 nova_compute[189493]: 2025-12-09 10:50:27.840 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  9 10:50:27 compute-0 nova_compute[189493]: 2025-12-09 10:50:27.879 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  9 10:50:27 compute-0 nova_compute[189493]: 2025-12-09 10:50:27.882 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  9 10:50:27 compute-0 nova_compute[189493]: 2025-12-09 10:50:27.883 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  9 10:50:27 compute-0 nova_compute[189493]: 2025-12-09 10:50:27.883 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec  9 10:50:27 compute-0 nova_compute[189493]: 2025-12-09 10:50:27.981 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1bddf2bf-8932-4428-97d7-7342a7ec414b/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  9 10:50:28 compute-0 nova_compute[189493]: 2025-12-09 10:50:28.054 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1bddf2bf-8932-4428-97d7-7342a7ec414b/disk --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  9 10:50:28 compute-0 nova_compute[189493]: 2025-12-09 10:50:28.055 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1bddf2bf-8932-4428-97d7-7342a7ec414b/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  9 10:50:28 compute-0 nova_compute[189493]: 2025-12-09 10:50:28.126 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1bddf2bf-8932-4428-97d7-7342a7ec414b/disk --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  9 10:50:28 compute-0 nova_compute[189493]: 2025-12-09 10:50:28.128 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  9 10:50:28 compute-0 nova_compute[189493]: 2025-12-09 10:50:28.186 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.eph0 --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  9 10:50:28 compute-0 nova_compute[189493]: 2025-12-09 10:50:28.190 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  9 10:50:28 compute-0 nova_compute[189493]: 2025-12-09 10:50:28.248 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.eph0 --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  9 10:50:28 compute-0 nova_compute[189493]: 2025-12-09 10:50:28.255 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  9 10:50:28 compute-0 nova_compute[189493]: 2025-12-09 10:50:28.352 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk --force-share --output=json" returned: 0 in 0.097s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  9 10:50:28 compute-0 nova_compute[189493]: 2025-12-09 10:50:28.355 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  9 10:50:28 compute-0 nova_compute[189493]: 2025-12-09 10:50:28.421 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  9 10:50:28 compute-0 nova_compute[189493]: 2025-12-09 10:50:28.423 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  9 10:50:28 compute-0 nova_compute[189493]: 2025-12-09 10:50:28.505 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.eph0 --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  9 10:50:28 compute-0 nova_compute[189493]: 2025-12-09 10:50:28.512 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  9 10:50:28 compute-0 nova_compute[189493]: 2025-12-09 10:50:28.617 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.eph0 --force-share --output=json" returned: 0 in 0.105s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  9 10:50:28 compute-0 podman[240742]: 2025-12-09 10:50:28.980590153 +0000 UTC m=+0.128547325 container health_status 8508a94dacd5acdb5dbf860f4282331529be5c86ebd3e90b10e1dde8bc5013e9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec  9 10:50:29 compute-0 nova_compute[189493]: 2025-12-09 10:50:29.113 189497 WARNING nova.virt.libvirt.driver [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec  9 10:50:29 compute-0 nova_compute[189493]: 2025-12-09 10:50:29.115 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5106MB free_disk=72.18374252319336GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec  9 10:50:29 compute-0 nova_compute[189493]: 2025-12-09 10:50:29.115 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  9 10:50:29 compute-0 nova_compute[189493]: 2025-12-09 10:50:29.116 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  9 10:50:29 compute-0 nova_compute[189493]: 2025-12-09 10:50:29.201 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Instance 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec  9 10:50:29 compute-0 nova_compute[189493]: 2025-12-09 10:50:29.202 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Instance 1bddf2bf-8932-4428-97d7-7342a7ec414b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec  9 10:50:29 compute-0 nova_compute[189493]: 2025-12-09 10:50:29.202 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec  9 10:50:29 compute-0 nova_compute[189493]: 2025-12-09 10:50:29.203 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec  9 10:50:29 compute-0 nova_compute[189493]: 2025-12-09 10:50:29.261 189497 DEBUG nova.compute.provider_tree [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Inventory has not changed in ProviderTree for provider: cdc1168d-33c9-4d2c-8f23-1b695a68afd0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec  9 10:50:29 compute-0 nova_compute[189493]: 2025-12-09 10:50:29.279 189497 DEBUG nova.scheduler.client.report [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Inventory has not changed for provider cdc1168d-33c9-4d2c-8f23-1b695a68afd0 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
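The inventory dict logged above is what Placement actually schedules against: for each resource class the schedulable capacity is (total - reserved) * allocation_ratio. A minimal Python sketch, with the values copied from the log line above:

```python
# Sketch: derive schedulable capacity from the inventory record above.
# Placement sizes each resource class as (total - reserved) * allocation_ratio.
inventory = {  # values copied from the log line
    "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
    "VCPU": {"total": 8, "reserved": 0, "allocation_ratio": 4.0},
    "DISK_GB": {"total": 79, "reserved": 1, "allocation_ratio": 0.9},
}

for rc, inv in inventory.items():
    capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
    print(f"{rc}: {capacity:g} schedulable")
# VCPU: (8 - 0) * 4.0 = 32, so the 2 vCPUs allocated above leave ample room.
```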
Dec  9 10:50:29 compute-0 nova_compute[189493]: 2025-12-09 10:50:29.300 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec  9 10:50:29 compute-0 nova_compute[189493]: 2025-12-09 10:50:29.301 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.185s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
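The Acquiring/acquired/released triple around "compute_resources" above (and around "_check_child_processes" later in this excerpt) is oslo.concurrency's standard lock instrumentation. A minimal sketch of the pattern that produces those lines, not Nova's actual code:

```python
# Sketch: oslo.concurrency emits the Acquiring/acquired/released lines above
# whenever code runs under one of its named locks. Not Nova's actual code.
from oslo_concurrency import lockutils


@lockutils.synchronized("compute_resources")
def update_available_resource():
    # Critical section: reconcile tracked instances with the hypervisor
    # view and push inventory to Placement (held for 0.185s in the log).
    pass


update_available_resource()
```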
Dec  9 10:50:29 compute-0 podman[203687]: time="2025-12-09T10:50:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  9 10:50:29 compute-0 podman[203687]: @ - - [09/Dec/2025:10:50:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Dec  9 10:50:29 compute-0 podman[203687]: @ - - [09/Dec/2025:10:50:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4778 "" "Go-http-client/1.1"
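The two GETs above are prometheus-podman-exporter polling the libpod REST API through the unix socket it mounts (/run/podman/podman.sock, per the podman_exporter config_data). A sketch of the same query using only the standard library; the socket path and API version come from the log, and the trimmed query string is an assumption:

```python
# Sketch: replay the exporter's libpod query over the Podman unix socket
# using only the standard library.
import http.client
import socket


class UnixHTTPConnection(http.client.HTTPConnection):
    """HTTPConnection variant that connects to a unix domain socket."""

    def __init__(self, path):
        super().__init__("localhost")  # host is ignored for unix sockets
        self.unix_path = path

    def connect(self):
        sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        sock.connect(self.unix_path)
        self.sock = sock


conn = UnixHTTPConnection("/run/podman/podman.sock")
conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
resp = conn.getresponse()
print(resp.status, len(resp.read()), "bytes")  # logged above as 200, 29523
```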
Dec  9 10:50:30 compute-0 nova_compute[189493]: 2025-12-09 10:50:30.303 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  9 10:50:30 compute-0 nova_compute[189493]: 2025-12-09 10:50:30.304 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec  9 10:50:30 compute-0 nova_compute[189493]: 2025-12-09 10:50:30.305 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
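Every "Running periodic task ..." line in this excerpt comes from oslo.service's periodic task loop: manager methods opt in with a decorator and the loop invokes them on each tick. A minimal sketch of that pattern, assuming a 60-second spacing rather than Nova's real configuration:

```python
# Sketch: the oslo.service periodic task pattern behind the
# "Running periodic task ComputeManager._*" lines above.
from oslo_config import cfg
from oslo_service import periodic_task


class Manager(periodic_task.PeriodicTasks):
    @periodic_task.periodic_task(spacing=60)  # spacing is an assumption
    def _heal_instance_info_cache(self, context):
        # refresh the network info cache for one instance per tick
        pass


mgr = Manager(cfg.CONF)
mgr.run_periodic_tasks(context=None)  # the loop the log lines trace
```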
Dec  9 10:50:30 compute-0 nova_compute[189493]: 2025-12-09 10:50:30.690 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Acquiring lock "refresh_cache-41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec  9 10:50:30 compute-0 nova_compute[189493]: 2025-12-09 10:50:30.690 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Acquired lock "refresh_cache-41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec  9 10:50:30 compute-0 nova_compute[189493]: 2025-12-09 10:50:30.691 189497 DEBUG nova.network.neutron [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] [instance: 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec  9 10:50:30 compute-0 nova_compute[189493]: 2025-12-09 10:50:30.691 189497 DEBUG nova.objects.instance [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec  9 10:50:31 compute-0 openstack_network_exporter[205823]: ERROR   10:50:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  9 10:50:31 compute-0 openstack_network_exporter[205823]: ERROR   10:50:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  9 10:50:31 compute-0 openstack_network_exporter[205823]: ERROR   10:50:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  9 10:50:31 compute-0 openstack_network_exporter[205823]: ERROR   10:50:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  9 10:50:31 compute-0 openstack_network_exporter[205823]: ERROR   10:50:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  9 10:50:31 compute-0 nova_compute[189493]: 2025-12-09 10:50:31.688 189497 DEBUG nova.network.neutron [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] [instance: 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f] Updating instance_info_cache with network_info: [{"id": "2c684388-b6d9-4de0-8691-29807fabed2c", "address": "fa:16:3e:c7:65:39", "network": {"id": "c5af7354-5afe-400a-9e13-5500648117d8", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.250", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.226", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "736bbfddbeea47e3ac9d863ba120b8f2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2c684388-b6", "ovs_interfaceid": "2c684388-b6d9-4de0-8691-29807fabed2c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec  9 10:50:31 compute-0 nova_compute[189493]: 2025-12-09 10:50:31.711 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Releasing lock "refresh_cache-41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec  9 10:50:31 compute-0 nova_compute[189493]: 2025-12-09 10:50:31.712 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] [instance: 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
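The info_cache payload logged above nests the addresses several levels deep (vif -> network -> subnets -> ips -> floating_ips). A short sketch that extracts the fixed and floating IPs; the literal below is trimmed from the logged payload:

```python
# Sketch: walk the network_info structure Nova logs above and collect
# the instance's fixed and floating IP addresses.
network_info = [  # trimmed from the log line
    {"network": {"subnets": [
        {"ips": [
            {"address": "192.168.0.250",
             "floating_ips": [{"address": "192.168.122.226"}]},
        ]},
    ]}},
]

for vif in network_info:
    for subnet in vif["network"]["subnets"]:
        for ip in subnet["ips"]:
            print("fixed:", ip["address"])
            for fip in ip.get("floating_ips", []):
                print("floating:", fip["address"])
```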
Dec  9 10:50:31 compute-0 nova_compute[189493]: 2025-12-09 10:50:31.714 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  9 10:50:31 compute-0 nova_compute[189493]: 2025-12-09 10:50:31.715 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  9 10:50:31 compute-0 nova_compute[189493]: 2025-12-09 10:50:31.716 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec  9 10:50:32 compute-0 nova_compute[189493]: 2025-12-09 10:50:32.050 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 10:50:32 compute-0 nova_compute[189493]: 2025-12-09 10:50:32.597 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 10:50:32 compute-0 podman[240764]: 2025-12-09 10:50:32.999417429 +0000 UTC m=+0.138858363 container health_status ceb1c84a2b093143b9383b7e11364d7e851348d724743a0cd9ce4fd0c7070c92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=edpm, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Dec  9 10:50:33 compute-0 ovn_controller[97780]: 2025-12-09T10:50:33Z|00039|memory_trim|INFO|Detected inactivity (last active 30004 ms ago): trimming memory
Dec  9 10:50:34 compute-0 podman[240784]: 2025-12-09 10:50:34.020424706 +0000 UTC m=+0.153148966 container health_status 8ad198c17f1da12dc50d5e17562d0139fb2a2f84db056ee9551dbf4f34c4cb9d (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=kepler, version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0, architecture=x86_64, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., release=1214.1726694543, vcs-type=git, io.openshift.expose-services=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.display-name=Red Hat Universal Base Image 9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., com.redhat.component=ubi9-container, name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, distribution-scope=public, managed_by=edpm_ansible, release-0.7.12=, build-date=2024-09-18T21:23:30)
Dec  9 10:50:37 compute-0 podman[240802]: 2025-12-09 10:50:37.003334304 +0000 UTC m=+0.125258916 container health_status 8f562587c42532f877bd4ac5090cf2d81dd9415b6201e22f74972e6d6b9e9403 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true)
Dec  9 10:50:37 compute-0 podman[240803]: 2025-12-09 10:50:37.005441261 +0000 UTC m=+0.132933263 container health_status b432835229990b9e7cd237d75f8273b15e565fca524d4ea9a7c1f1bf3c773614 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec  9 10:50:37 compute-0 nova_compute[189493]: 2025-12-09 10:50:37.054 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 10:50:37 compute-0 nova_compute[189493]: 2025-12-09 10:50:37.602 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 10:50:39 compute-0 ovn_controller[97780]: 2025-12-09T10:50:39Z|00006|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:01:4e:b4 192.168.0.212
Dec  9 10:50:39 compute-0 ovn_controller[97780]: 2025-12-09T10:50:39Z|00007|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:01:4e:b4 192.168.0.212
Dec  9 10:50:42 compute-0 podman[240855]: 2025-12-09 10:50:42.005479165 +0000 UTC m=+0.146547109 container health_status 5da5cd4e36e0bba48fb617392bc8983ed1dbced7e4599ef74bb3327a2d50468d (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9-minimal, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., managed_by=edpm_ansible, version=9.6, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, io.openshift.expose-services=, config_id=edpm, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, release=1755695350)
Dec  9 10:50:42 compute-0 nova_compute[189493]: 2025-12-09 10:50:42.056 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 10:50:42 compute-0 nova_compute[189493]: 2025-12-09 10:50:42.606 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 10:50:44 compute-0 podman[240874]: 2025-12-09 10:50:44.006294762 +0000 UTC m=+0.152340455 container health_status e0a077177b2f078df1f170a6e5c0e8e08d4365b999ec0c487047ed6ab628f3d6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec  9 10:50:46 compute-0 podman[240900]: 2025-12-09 10:50:46.967738042 +0000 UTC m=+0.095014764 container health_status d3a438131bb4ae6fd62d2e1493edbbbd51d1b8d6cbe1e9243f414a3aa421452b (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  9 10:50:47 compute-0 nova_compute[189493]: 2025-12-09 10:50:47.059 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 10:50:47 compute-0 nova_compute[189493]: 2025-12-09 10:50:47.610 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 10:50:52 compute-0 nova_compute[189493]: 2025-12-09 10:50:52.062 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 10:50:52 compute-0 nova_compute[189493]: 2025-12-09 10:50:52.614 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 10:50:55 compute-0 podman[240923]: 2025-12-09 10:50:55.982744446 +0000 UTC m=+0.129773068 container health_status 0391d8911d61abd7376f1f93f329cadfe8d3add845c9e6f46fc2c3dfbcc4f02a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  9 10:50:57 compute-0 nova_compute[189493]: 2025-12-09 10:50:57.063 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 10:50:57 compute-0 nova_compute[189493]: 2025-12-09 10:50:57.616 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 10:50:59 compute-0 podman[203687]: time="2025-12-09T10:50:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  9 10:50:59 compute-0 podman[203687]: @ - - [09/Dec/2025:10:50:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Dec  9 10:50:59 compute-0 podman[203687]: @ - - [09/Dec/2025:10:50:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4781 "" "Go-http-client/1.1"
Dec  9 10:50:59 compute-0 podman[240943]: 2025-12-09 10:50:59.974345659 +0000 UTC m=+0.115701400 container health_status 8508a94dacd5acdb5dbf860f4282331529be5c86ebd3e90b10e1dde8bc5013e9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  9 10:51:01 compute-0 openstack_network_exporter[205823]: ERROR   10:51:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  9 10:51:01 compute-0 openstack_network_exporter[205823]: ERROR   10:51:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  9 10:51:01 compute-0 openstack_network_exporter[205823]: ERROR   10:51:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  9 10:51:01 compute-0 openstack_network_exporter[205823]: ERROR   10:51:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  9 10:51:01 compute-0 openstack_network_exporter[205823]: ERROR   10:51:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
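These exporter errors recur on each 30-second scrape (10:50:31, 10:51:01, 10:51:31): the collectors resolve a daemon's appctl control socket before calling it, and on this node the sockets it expects for ovn-northd and the OVN database's ovsdb-server are absent, which is expected if those control-plane daemons only run elsewhere. A sketch of that existence check; the run directories are conventional defaults and an assumption, not read from this host:

```python
# Sketch: locate a daemon's appctl control socket ("<name>.*.ctl"), the
# lookup that fails in the errors above. Paths are assumed defaults.
import glob

PATTERNS = {
    "ovn-northd": "/var/run/ovn/ovn-northd.*.ctl",
    "ovsdb-server": "/var/run/openvswitch/ovsdb-server.*.ctl",
}

for daemon, pattern in PATTERNS.items():
    matches = glob.glob(pattern)
    if not matches:
        print(f"no control socket files found for {daemon}")
    else:
        print(f"{daemon}: {matches[0]}")
```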
Dec  9 10:51:02 compute-0 nova_compute[189493]: 2025-12-09 10:51:02.066 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 10:51:02 compute-0 nova_compute[189493]: 2025-12-09 10:51:02.620 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 10:51:03 compute-0 podman[240969]: 2025-12-09 10:51:03.968131453 +0000 UTC m=+0.113601314 container health_status ceb1c84a2b093143b9383b7e11364d7e851348d724743a0cd9ce4fd0c7070c92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.vendor=CentOS, config_id=edpm, container_name=ceilometer_agent_ipmi)
Dec  9 10:51:05 compute-0 podman[240987]: 2025-12-09 10:51:05.007701569 +0000 UTC m=+0.147095684 container health_status 8ad198c17f1da12dc50d5e17562d0139fb2a2f84db056ee9551dbf4f34c4cb9d (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, version=9.4, build-date=2024-09-18T21:23:30, summary=Provides the latest release of Red Hat Universal Base Image 9., architecture=x86_64, container_name=kepler, io.openshift.expose-services=, io.buildah.version=1.29.0, release-0.7.12=, com.redhat.component=ubi9-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9, distribution-scope=public, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.openshift.tags=base rhel9, name=ubi9, maintainer=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, vendor=Red Hat, Inc., release=1214.1726694543)
Dec  9 10:51:07 compute-0 nova_compute[189493]: 2025-12-09 10:51:07.070 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 10:51:07 compute-0 nova_compute[189493]: 2025-12-09 10:51:07.624 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 10:51:07 compute-0 podman[241006]: 2025-12-09 10:51:07.970922309 +0000 UTC m=+0.112503135 container health_status 8f562587c42532f877bd4ac5090cf2d81dd9415b6201e22f74972e6d6b9e9403 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0)
Dec  9 10:51:07 compute-0 podman[241007]: 2025-12-09 10:51:07.999331202 +0000 UTC m=+0.135180734 container health_status b432835229990b9e7cd237d75f8273b15e565fca524d4ea9a7c1f1bf3c773614 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, io.buildah.version=1.41.4, tcib_managed=true, config_id=edpm, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Dec  9 10:51:12 compute-0 nova_compute[189493]: 2025-12-09 10:51:12.074 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 10:51:12 compute-0 nova_compute[189493]: 2025-12-09 10:51:12.628 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 10:51:12 compute-0 podman[241045]: 2025-12-09 10:51:12.954367635 +0000 UTC m=+0.109064332 container health_status 5da5cd4e36e0bba48fb617392bc8983ed1dbced7e4599ef74bb3327a2d50468d (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, container_name=openstack_network_exporter, io.openshift.expose-services=, name=ubi9-minimal, release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=edpm, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., managed_by=edpm_ansible, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., version=9.6, io.buildah.version=1.33.7)
Dec  9 10:51:14 compute-0 podman[241065]: 2025-12-09 10:51:14.885022725 +0000 UTC m=+0.181667983 container health_status e0a077177b2f078df1f170a6e5c0e8e08d4365b999ec0c487047ed6ab628f3d6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec  9 10:51:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:51:16.979 106644 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  9 10:51:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:51:16.979 106644 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  9 10:51:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:51:16.980 106644 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  9 10:51:17 compute-0 nova_compute[189493]: 2025-12-09 10:51:17.076 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 10:51:17 compute-0 nova_compute[189493]: 2025-12-09 10:51:17.633 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 10:51:17 compute-0 podman[241090]: 2025-12-09 10:51:17.980957681 +0000 UTC m=+0.118993280 container health_status d3a438131bb4ae6fd62d2e1493edbbbd51d1b8d6cbe1e9243f414a3aa421452b (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec  9 10:51:22 compute-0 nova_compute[189493]: 2025-12-09 10:51:22.079 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 10:51:22 compute-0 nova_compute[189493]: 2025-12-09 10:51:22.636 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 10:51:25 compute-0 nova_compute[189493]: 2025-12-09 10:51:25.843 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  9 10:51:25 compute-0 nova_compute[189493]: 2025-12-09 10:51:25.844 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  9 10:51:25 compute-0 nova_compute[189493]: 2025-12-09 10:51:25.845 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  9 10:51:26 compute-0 nova_compute[189493]: 2025-12-09 10:51:26.842 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  9 10:51:27 compute-0 podman[241114]: 2025-12-09 10:51:27.008097933 +0000 UTC m=+0.139249613 container health_status 0391d8911d61abd7376f1f93f329cadfe8d3add845c9e6f46fc2c3dfbcc4f02a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3)
Dec  9 10:51:27 compute-0 nova_compute[189493]: 2025-12-09 10:51:27.083 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 10:51:27 compute-0 nova_compute[189493]: 2025-12-09 10:51:27.639 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 10:51:27 compute-0 nova_compute[189493]: 2025-12-09 10:51:27.842 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  9 10:51:28 compute-0 nova_compute[189493]: 2025-12-09 10:51:28.837 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  9 10:51:28 compute-0 nova_compute[189493]: 2025-12-09 10:51:28.868 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  9 10:51:28 compute-0 nova_compute[189493]: 2025-12-09 10:51:28.869 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec  9 10:51:29 compute-0 nova_compute[189493]: 2025-12-09 10:51:29.682 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Acquiring lock "refresh_cache-1bddf2bf-8932-4428-97d7-7342a7ec414b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec  9 10:51:29 compute-0 nova_compute[189493]: 2025-12-09 10:51:29.683 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Acquired lock "refresh_cache-1bddf2bf-8932-4428-97d7-7342a7ec414b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec  9 10:51:29 compute-0 nova_compute[189493]: 2025-12-09 10:51:29.683 189497 DEBUG nova.network.neutron [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] [instance: 1bddf2bf-8932-4428-97d7-7342a7ec414b] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec  9 10:51:29 compute-0 podman[203687]: time="2025-12-09T10:51:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  9 10:51:29 compute-0 podman[203687]: @ - - [09/Dec/2025:10:51:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Dec  9 10:51:29 compute-0 podman[203687]: @ - - [09/Dec/2025:10:51:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4784 "" "Go-http-client/1.1"
Dec  9 10:51:30 compute-0 podman[241133]: 2025-12-09 10:51:30.981693974 +0000 UTC m=+0.114493088 container health_status 8508a94dacd5acdb5dbf860f4282331529be5c86ebd3e90b10e1dde8bc5013e9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec  9 10:51:31 compute-0 openstack_network_exporter[205823]: ERROR   10:51:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  9 10:51:31 compute-0 openstack_network_exporter[205823]: ERROR   10:51:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  9 10:51:31 compute-0 openstack_network_exporter[205823]: ERROR   10:51:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  9 10:51:31 compute-0 openstack_network_exporter[205823]: ERROR   10:51:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  9 10:51:31 compute-0 openstack_network_exporter[205823]: ERROR   10:51:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  9 10:51:31 compute-0 nova_compute[189493]: 2025-12-09 10:51:31.716 189497 DEBUG nova.network.neutron [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] [instance: 1bddf2bf-8932-4428-97d7-7342a7ec414b] Updating instance_info_cache with network_info: [{"id": "7819acf8-daa2-4391-96d4-ef33c260f794", "address": "fa:16:3e:01:4e:b4", "network": {"id": "c5af7354-5afe-400a-9e13-5500648117d8", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.212", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.172", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "736bbfddbeea47e3ac9d863ba120b8f2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7819acf8-da", "ovs_interfaceid": "7819acf8-daa2-4391-96d4-ef33c260f794", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  9 10:51:31 compute-0 nova_compute[189493]: 2025-12-09 10:51:31.740 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Releasing lock "refresh_cache-1bddf2bf-8932-4428-97d7-7342a7ec414b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  9 10:51:31 compute-0 nova_compute[189493]: 2025-12-09 10:51:31.741 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] [instance: 1bddf2bf-8932-4428-97d7-7342a7ec414b] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  9 10:51:31 compute-0 nova_compute[189493]: 2025-12-09 10:51:31.741 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  9 10:51:31 compute-0 nova_compute[189493]: 2025-12-09 10:51:31.741 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  9 10:51:31 compute-0 nova_compute[189493]: 2025-12-09 10:51:31.766 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  9 10:51:31 compute-0 nova_compute[189493]: 2025-12-09 10:51:31.767 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  9 10:51:31 compute-0 nova_compute[189493]: 2025-12-09 10:51:31.767 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  9 10:51:31 compute-0 nova_compute[189493]: 2025-12-09 10:51:31.768 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  9 10:51:31 compute-0 nova_compute[189493]: 2025-12-09 10:51:31.871 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1bddf2bf-8932-4428-97d7-7342a7ec414b/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  9 10:51:31 compute-0 nova_compute[189493]: 2025-12-09 10:51:31.970 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1bddf2bf-8932-4428-97d7-7342a7ec414b/disk --force-share --output=json" returned: 0 in 0.098s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  9 10:51:31 compute-0 nova_compute[189493]: 2025-12-09 10:51:31.971 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1bddf2bf-8932-4428-97d7-7342a7ec414b/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  9 10:51:32 compute-0 nova_compute[189493]: 2025-12-09 10:51:32.067 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1bddf2bf-8932-4428-97d7-7342a7ec414b/disk --force-share --output=json" returned: 0 in 0.096s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  9 10:51:32 compute-0 nova_compute[189493]: 2025-12-09 10:51:32.069 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  9 10:51:32 compute-0 nova_compute[189493]: 2025-12-09 10:51:32.101 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 10:51:32 compute-0 nova_compute[189493]: 2025-12-09 10:51:32.174 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.eph0 --force-share --output=json" returned: 0 in 0.105s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  9 10:51:32 compute-0 nova_compute[189493]: 2025-12-09 10:51:32.176 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  9 10:51:32 compute-0 nova_compute[189493]: 2025-12-09 10:51:32.262 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.eph0 --force-share --output=json" returned: 0 in 0.086s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  9 10:51:32 compute-0 nova_compute[189493]: 2025-12-09 10:51:32.278 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  9 10:51:32 compute-0 nova_compute[189493]: 2025-12-09 10:51:32.351 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  9 10:51:32 compute-0 nova_compute[189493]: 2025-12-09 10:51:32.353 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  9 10:51:32 compute-0 nova_compute[189493]: 2025-12-09 10:51:32.420 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  9 10:51:32 compute-0 nova_compute[189493]: 2025-12-09 10:51:32.421 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  9 10:51:32 compute-0 nova_compute[189493]: 2025-12-09 10:51:32.531 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.eph0 --force-share --output=json" returned: 0 in 0.110s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  9 10:51:32 compute-0 nova_compute[189493]: 2025-12-09 10:51:32.533 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  9 10:51:32 compute-0 nova_compute[189493]: 2025-12-09 10:51:32.631 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.eph0 --force-share --output=json" returned: 0 in 0.098s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  9 10:51:32 compute-0 nova_compute[189493]: 2025-12-09 10:51:32.642 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 10:51:33 compute-0 nova_compute[189493]: 2025-12-09 10:51:33.231 189497 WARNING nova.virt.libvirt.driver [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  9 10:51:33 compute-0 nova_compute[189493]: 2025-12-09 10:51:33.232 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5041MB free_disk=72.16302108764648GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  9 10:51:33 compute-0 nova_compute[189493]: 2025-12-09 10:51:33.232 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  9 10:51:33 compute-0 nova_compute[189493]: 2025-12-09 10:51:33.233 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  9 10:51:33 compute-0 nova_compute[189493]: 2025-12-09 10:51:33.489 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Instance 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  9 10:51:33 compute-0 nova_compute[189493]: 2025-12-09 10:51:33.490 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Instance 1bddf2bf-8932-4428-97d7-7342a7ec414b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  9 10:51:33 compute-0 nova_compute[189493]: 2025-12-09 10:51:33.490 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  9 10:51:33 compute-0 nova_compute[189493]: 2025-12-09 10:51:33.490 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  9 10:51:33 compute-0 nova_compute[189493]: 2025-12-09 10:51:33.556 189497 DEBUG nova.compute.provider_tree [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Inventory has not changed in ProviderTree for provider: cdc1168d-33c9-4d2c-8f23-1b695a68afd0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  9 10:51:33 compute-0 nova_compute[189493]: 2025-12-09 10:51:33.577 189497 DEBUG nova.scheduler.client.report [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Inventory has not changed for provider cdc1168d-33c9-4d2c-8f23-1b695a68afd0 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  9 10:51:33 compute-0 nova_compute[189493]: 2025-12-09 10:51:33.580 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  9 10:51:33 compute-0 nova_compute[189493]: 2025-12-09 10:51:33.580 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.348s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  9 10:51:33 compute-0 nova_compute[189493]: 2025-12-09 10:51:33.682 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  9 10:51:33 compute-0 nova_compute[189493]: 2025-12-09 10:51:33.683 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  9 10:51:34 compute-0 podman[241182]: 2025-12-09 10:51:34.965134441 +0000 UTC m=+0.110444990 container health_status ceb1c84a2b093143b9383b7e11364d7e851348d724743a0cd9ce4fd0c7070c92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=edpm, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Dec  9 10:51:35 compute-0 podman[241201]: 2025-12-09 10:51:35.973501068 +0000 UTC m=+0.113928982 container health_status 8ad198c17f1da12dc50d5e17562d0139fb2a2f84db056ee9551dbf4f34c4cb9d (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of Red Hat Universal Base Image 9., config_id=edpm, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, maintainer=Red Hat, Inc., architecture=x86_64, com.redhat.component=ubi9-container, managed_by=edpm_ansible, release=1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release-0.7.12=, io.buildah.version=1.29.0, vcs-type=git, build-date=2024-09-18T21:23:30, io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9, version=9.4, container_name=kepler, distribution-scope=public, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9)
Dec  9 10:51:37 compute-0 nova_compute[189493]: 2025-12-09 10:51:37.090 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 10:51:37 compute-0 nova_compute[189493]: 2025-12-09 10:51:37.647 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 10:51:38 compute-0 podman[241223]: 2025-12-09 10:51:38.336511959 +0000 UTC m=+0.100782709 container health_status b432835229990b9e7cd237d75f8273b15e565fca524d4ea9a7c1f1bf3c773614 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0)
Dec  9 10:51:38 compute-0 podman[241222]: 2025-12-09 10:51:38.352693424 +0000 UTC m=+0.110180811 container health_status 8f562587c42532f877bd4ac5090cf2d81dd9415b6201e22f74972e6d6b9e9403 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Dec  9 10:51:42 compute-0 nova_compute[189493]: 2025-12-09 10:51:42.094 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 10:51:42 compute-0 nova_compute[189493]: 2025-12-09 10:51:42.651 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 10:51:43 compute-0 podman[241259]: 2025-12-09 10:51:43.957662635 +0000 UTC m=+0.107100950 container health_status 5da5cd4e36e0bba48fb617392bc8983ed1dbced7e4599ef74bb3327a2d50468d (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, release=1755695350, io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, container_name=openstack_network_exporter, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., io.openshift.expose-services=)
Dec  9 10:51:46 compute-0 podman[241280]: 2025-12-09 10:51:46.016732216 +0000 UTC m=+0.149034427 container health_status e0a077177b2f078df1f170a6e5c0e8e08d4365b999ec0c487047ed6ab628f3d6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251202)
Dec  9 10:51:47 compute-0 nova_compute[189493]: 2025-12-09 10:51:47.097 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 10:51:47 compute-0 nova_compute[189493]: 2025-12-09 10:51:47.653 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 10:51:48 compute-0 podman[241304]: 2025-12-09 10:51:48.952590649 +0000 UTC m=+0.104507040 container health_status d3a438131bb4ae6fd62d2e1493edbbbd51d1b8d6cbe1e9243f414a3aa421452b (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec  9 10:51:52 compute-0 nova_compute[189493]: 2025-12-09 10:51:52.101 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 10:51:52 compute-0 nova_compute[189493]: 2025-12-09 10:51:52.656 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 10:51:57 compute-0 nova_compute[189493]: 2025-12-09 10:51:57.103 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 10:51:57 compute-0 nova_compute[189493]: 2025-12-09 10:51:57.660 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 10:51:57 compute-0 podman[241329]: 2025-12-09 10:51:57.996749796 +0000 UTC m=+0.131911855 container health_status 0391d8911d61abd7376f1f93f329cadfe8d3add845c9e6f46fc2c3dfbcc4f02a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec  9 10:51:59 compute-0 podman[203687]: time="2025-12-09T10:51:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  9 10:51:59 compute-0 podman[203687]: @ - - [09/Dec/2025:10:51:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Dec  9 10:51:59 compute-0 podman[203687]: @ - - [09/Dec/2025:10:51:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4769 "" "Go-http-client/1.1"
Dec  9 10:52:01 compute-0 openstack_network_exporter[205823]: ERROR   10:52:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  9 10:52:01 compute-0 openstack_network_exporter[205823]: ERROR   10:52:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  9 10:52:01 compute-0 openstack_network_exporter[205823]: ERROR   10:52:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  9 10:52:01 compute-0 openstack_network_exporter[205823]: ERROR   10:52:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  9 10:52:01 compute-0 openstack_network_exporter[205823]: ERROR   10:52:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  9 10:52:02 compute-0 podman[241348]: 2025-12-09 10:52:02.000459116 +0000 UTC m=+0.133890029 container health_status 8508a94dacd5acdb5dbf860f4282331529be5c86ebd3e90b10e1dde8bc5013e9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  9 10:52:02 compute-0 nova_compute[189493]: 2025-12-09 10:52:02.105 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 10:52:02 compute-0 nova_compute[189493]: 2025-12-09 10:52:02.665 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 10:52:04 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Dec  9 10:52:05 compute-0 podman[241373]: 2025-12-09 10:52:05.951322577 +0000 UTC m=+0.111718554 container health_status ceb1c84a2b093143b9383b7e11364d7e851348d724743a0cd9ce4fd0c7070c92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS)
Dec  9 10:52:07 compute-0 podman[241393]: 2025-12-09 10:52:07.005637549 +0000 UTC m=+0.145409129 container health_status 8ad198c17f1da12dc50d5e17562d0139fb2a2f84db056ee9551dbf4f34c4cb9d (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.openshift.tags=base rhel9, vcs-type=git, distribution-scope=public, managed_by=edpm_ansible, version=9.4, config_id=edpm, io.buildah.version=1.29.0, maintainer=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, build-date=2024-09-18T21:23:30, io.k8s.display-name=Red Hat Universal Base Image 9, release-0.7.12=, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, io.openshift.expose-services=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-container)
Dec  9 10:52:07 compute-0 nova_compute[189493]: 2025-12-09 10:52:07.108 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 10:52:07 compute-0 nova_compute[189493]: 2025-12-09 10:52:07.667 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 10:52:08 compute-0 podman[241413]: 2025-12-09 10:52:08.818503655 +0000 UTC m=+0.138220085 container health_status 8f562587c42532f877bd4ac5090cf2d81dd9415b6201e22f74972e6d6b9e9403 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec  9 10:52:08 compute-0 podman[241414]: 2025-12-09 10:52:08.82315648 +0000 UTC m=+0.126205672 container health_status b432835229990b9e7cd237d75f8273b15e565fca524d4ea9a7c1f1bf3c773614 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true)
Dec  9 10:52:12 compute-0 nova_compute[189493]: 2025-12-09 10:52:12.111 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 10:52:12 compute-0 nova_compute[189493]: 2025-12-09 10:52:12.671 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 10:52:14 compute-0 podman[241450]: 2025-12-09 10:52:14.81553801 +0000 UTC m=+0.114122708 container health_status 5da5cd4e36e0bba48fb617392bc8983ed1dbced7e4599ef74bb3327a2d50468d (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, release=1755695350, distribution-scope=public, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, version=9.6, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., architecture=x86_64, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, vcs-type=git)
Dec  9 10:52:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:52:16.981 106644 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  9 10:52:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:52:16.981 106644 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  9 10:52:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:52:16.982 106644 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
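The acquire/acquired/released trio above is oslo.concurrency's standard lock tracing, reporting how long the caller waited for and then held the named lock. A minimal sketch of the same pattern, assuming oslo.concurrency is available (the lock name is taken from the log; the function body is hypothetical):

    from oslo_concurrency import lockutils

    @lockutils.synchronized("_check_child_processes")
    def check_child_processes():
        # Runs with the named in-process lock held; lockutils emits the
        # "Acquiring lock" / "acquired ... waited" / "released ... held"
        # DEBUG lines seen above around this call.
        pass

    check_child_processes()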
Dec  9 10:52:17 compute-0 podman[241470]: 2025-12-09 10:52:17.059153682 +0000 UTC m=+0.195290029 container health_status e0a077177b2f078df1f170a6e5c0e8e08d4365b999ec0c487047ed6ab628f3d6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, tcib_managed=true, managed_by=edpm_ansible)
Dec  9 10:52:17 compute-0 nova_compute[189493]: 2025-12-09 10:52:17.115 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 10:52:17 compute-0 nova_compute[189493]: 2025-12-09 10:52:17.675 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 10:52:19 compute-0 podman[241498]: 2025-12-09 10:52:19.978126922 +0000 UTC m=+0.118434195 container health_status d3a438131bb4ae6fd62d2e1493edbbbd51d1b8d6cbe1e9243f414a3aa421452b (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
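node_exporter's systemd collector only exports units whose names match --collector.systemd.unit-include. The pattern from the config_data above can be checked with an ordinary regex; the unit names below are illustrative, not taken from this host, and full-match anchoring is assumed:

    import re

    # Pattern copied from the node_exporter command line above; the
    # shell-level "\\." arrives at the collector as a literal "\.".
    unit_include = re.compile(r"(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service")

    for unit in ("edpm_nova_compute.service", "ovsdb-server.service",
                 "virtqemud.service", "sshd.service"):
        print(unit, bool(unit_include.fullmatch(unit)))
    # sshd.service does not match, so its metrics would not be exported.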
Dec  9 10:52:22 compute-0 nova_compute[189493]: 2025-12-09 10:52:22.116 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 10:52:22 compute-0 nova_compute[189493]: 2025-12-09 10:52:22.679 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.289 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is larger than the number of worker threads available to execute them. Therefore, one can expect the processing to take longer than expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.290 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
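The two manager lines above spell out the warning: with more pollsters in the source than worker threads, polling is effectively serialized. A small illustration with concurrent.futures, using the single worker reported in the log and three hypothetical pollsters:

    import time
    from concurrent.futures import ThreadPoolExecutor

    def poll(name):
        time.sleep(0.1)          # stand-in for one pollster's work
        return name

    pollsters = ["cpu", "disk.device.usage", "network.incoming.bytes"]

    # With max_workers=1 (as logged), the tasks run back to back, so the
    # polling task takes ~0.3 s here instead of the ~0.1 s a wider pool allows.
    with ThreadPoolExecutor(max_workers=1) as executor:
        start = time.monotonic()
        list(executor.map(poll, pollsters))
        print(f"elapsed: {time.monotonic() - start:.2f}s")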
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.292 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b800>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.294 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f8a75e1b7d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.295 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e19820>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.296 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75eb8080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.298 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75eb8110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.299 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b1a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.299 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75eb81a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.300 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b2c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.302 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.303 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.303 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a78fa8380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.304 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a7702ebd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.305 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b3e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.306 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.307 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75eb8440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.308 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a78c21460>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.312 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b4a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.312 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1bce0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.314 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.315 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1bd10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.316 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.317 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1bd70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.318 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1bdd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.319 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1be30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.319 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1bf20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.320 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b7a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.321 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '1bddf2bf-8932-4428-97d7-7342a7ec414b', 'name': 'vn-afn7y6w-x2vp5udxgoax-du67okrzyrz6-vnf-c7uowjdwt46l', 'flavor': {'id': 'cf91b364-8467-4d1e-8c92-f7d1fab99905', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '53d12211-5d5c-4333-b3ee-e3dcf1663767'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '736bbfddbeea47e3ac9d863ba120b8f2', 'user_id': 'e6d3a937c2a74eb0816d9f63820935e0', 'hostId': '17e7a15a42f56673ff2b1bfd38625d4824c4455b94d5713ec4c3a7ee', 'status': 'active', 'metadata': {'metering.server_group': '24f6e5b2-dd43-46f1-87a4-e2efc1300914'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.321 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1bfb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.327 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f', 'name': 'test_0', 'flavor': {'id': 'cf91b364-8467-4d1e-8c92-f7d1fab99905', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '53d12211-5d5c-4333-b3ee-e3dcf1663767'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '736bbfddbeea47e3ac9d863ba120b8f2', 'user_id': 'e6d3a937c2a74eb0816d9f63820935e0', 'hostId': '17e7a15a42f56673ff2b1bfd38625d4824c4455b94d5713ec4c3a7ee', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
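Each discovered instance dict above is the resource the compute pollsters sample against; flavor fields and user-set metering.* keys (such as metering.server_group on the first instance) travel with the samples as resource metadata. A hypothetical reduction of that mapping (the field selection is illustrative, not ceilometer's exact code):

    def resource_metadata(instance: dict) -> dict:
        # Select the fields a compute sample typically carries.
        meta = {
            "display_name": instance["name"],
            "instance_host": instance["OS-EXT-SRV-ATTR:host"],
            "flavor_name": instance["flavor"]["name"],
            "vcpus": instance["flavor"]["vcpus"],
            "memory_mb": instance["flavor"]["ram"],
        }
        # Forward user-supplied metering.* keys (e.g. metering.server_group)
        # so downstream pipelines can group samples by them.
        meta.update({k: v for k, v in instance.get("metadata", {}).items()
                     if k.startswith("metering.")})
        return meta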
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.328 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.328 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1b800>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.328 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1b800>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.328 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.331 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-12-09T10:52:23.328476) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.336 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/network.incoming.bytes volume: 4891 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.342 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/network.incoming.bytes volume: 2010 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.343 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
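network.incoming.bytes is a cumulative counter, so the per-instance volumes above (4891 and 2010 bytes) only become a rate once two polls are differenced. A tiny sketch of that delta computation; the second reading is hypothetical:

    def rate_bytes_per_sec(prev, curr, interval_s):
        """Bytes/sec from two cumulative readings taken interval_s apart."""
        return max(curr - prev, 0) / interval_s   # clamp in case the counter resets

    # The 4891-byte reading above, followed by a hypothetical next poll:
    print(rate_bytes_per_sec(4891, 6991, 30.0))   # -> 70.0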
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.343 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f8a7854a570>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.343 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.343 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e19820>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.343 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e19820>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.344 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.345 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-12-09T10:52:23.344041) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.386 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.386 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.387 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.429 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.430 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.431 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.431 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.431 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f8a75eb8050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.432 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.432 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75eb8080>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.432 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75eb8080>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.432 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.432 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/network.outgoing.packets volume: 43 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.433 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/network.outgoing.packets volume: 22 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.434 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.434 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f8a75eb80e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.434 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.435 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75eb8110>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.434 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-12-09T10:52:23.432611) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.435 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75eb8110>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.435 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.435 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.436 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-12-09T10:52:23.435322) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.436 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.436 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.437 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f8a75e1b260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.437 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.437 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1b1a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.437 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1b1a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.437 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.438 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-12-09T10:52:23.437715) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.515 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.read.bytes volume: 23325184 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.516 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.516 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.629 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.630 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.630 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.631 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.631 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f8a75eb8170>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.632 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.632 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75eb81a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.632 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75eb81a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.632 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.632 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.633 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.633 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.633 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f8a75e1b290>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.634 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.634 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1b2c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.634 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1b2c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.634 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.634 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.read.latency volume: 439593872 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.635 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.read.latency volume: 92612690 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.635 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.read.latency volume: 59905939 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.635 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.read.latency volume: 469600468 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.636 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.read.latency volume: 78501609 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.636 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.read.latency volume: 60811824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.637 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.637 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f8a75e1b2f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.637 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.637 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1b320>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.637 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1b320>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.637 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.637 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.read.requests volume: 844 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.638 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.638 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.638 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.639 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.639 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.640 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.640 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f8a75e1b350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.640 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.640 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1b380>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.640 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1b380>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.640 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.640 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.usage volume: 21364736 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.641 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.641 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.641 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.642 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.642 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.642 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.643 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f8a7710f530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.643 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.643 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a78fa8380>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.643 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a78fa8380>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.643 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.644 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-12-09T10:52:23.632357) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.644 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-12-09T10:52:23.634454) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.645 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-12-09T10:52:23.637804) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.645 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-12-09T10:52:23.640875) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.645 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-12-09T10:52:23.643531) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.669 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/cpu volume: 69030000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.698 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/cpu volume: 36320000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.699 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
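The cpu meter is cumulative guest CPU time in nanoseconds, so the volumes above correspond to roughly 69.0 s and 36.3 s of CPU time since the instances booted. A sketch converting two consecutive polls into a utilization percentage (any reading beyond the one logged is hypothetical):

    NS_PER_S = 1_000_000_000

    def cpu_util_percent(prev_ns, curr_ns, interval_s, vcpus=1):
        """Percentage of the interval the guest's vCPUs spent running."""
        return 100.0 * (curr_ns - prev_ns) / (interval_s * vcpus * NS_PER_S)

    # 69030000000 ns logged above, plus a hypothetical reading 30 s later:
    print(cpu_util_percent(69_030_000_000, 69_930_000_000, 30.0))   # -> 3.0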
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.699 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f8a78ed1430>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.699 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.700 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a7702ebd0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.700 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a7702ebd0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.700 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.700 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.allocation volume: 21635072 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.701 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.701 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.702 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.allocation volume: 21307392 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.702 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.703 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.703 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-12-09T10:52:23.700306) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.704 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.704 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f8a75e1b3b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.705 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.705 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1b3e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.705 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1b3e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.706 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.706 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.write.bytes volume: 41811968 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.706 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.707 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.708 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.708 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.709 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.710 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.710 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f8a75e1b410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.711 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.711 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1b440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.711 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1b440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.711 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.711 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.write.latency volume: 2108717398 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.712 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.write.latency volume: 13222286 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.713 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.713 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.write.latency volume: 1299788707 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.714 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.write.latency volume: 9241063 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.715 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.716 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.716 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f8a75eb8410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.716 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.717 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75eb8440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.717 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75eb8440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.717 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.717 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.718 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.718 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.719 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f8a75e1be90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.719 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.719 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a78c21460>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.719 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a78c21460>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.720 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.720 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/network.outgoing.bytes volume: 4864 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.720 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/network.outgoing.bytes volume: 2244 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.721 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.721 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f8a75e1b470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.722 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.722 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1b4a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.722 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1b4a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.723 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.723 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.write.requests volume: 232 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.723 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.724 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.724 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.write.requests volume: 234 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.725 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.726 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.727 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.727 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f8a75e1b830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.727 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.728 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1bce0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.728 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1bce0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.728 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.728 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/network.incoming.bytes.delta volume: 4801 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.729 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.729 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.730 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f8a75e1b4d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.730 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.730 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1b500>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.730 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1b500>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.730 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.731 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.731 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f8a75e1bad0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.731 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.731 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f8a75e1b530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.731 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.731 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1b560>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.731 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1b560>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.731 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.732 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.732 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f8a75e1bd40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.732 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.732 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1bd70>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.732 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1bd70>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.732 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.733 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/network.incoming.packets volume: 32 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.733 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/network.incoming.packets volume: 18 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.733 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.733 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f8a75e1bda0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.733 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.734 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1bdd0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.734 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1bdd0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.734 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.734 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.734 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.734 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.735 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f8a75e1be00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.735 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.735 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1be30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.735 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1be30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.735 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.735 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.735 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.736 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.736 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f8a75e1bef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.736 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.736 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1bf20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.736 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1bf20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.736 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.736 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/network.outgoing.bytes.delta volume: 4864 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.737 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/network.outgoing.bytes.delta volume: 140 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.737 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.737 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f8a75e1b770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.737 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.737 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1b7a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.738 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1b7a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.738 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.738 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/memory.usage volume: 49.13671875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.738 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/memory.usage volume: 48.94921875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.738 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.739 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f8a75e1bf80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.739 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.739 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.739 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.739 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.739 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.739 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.740 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.740 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.740 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.740 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.740 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.740 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.740 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.740 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.740 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.740 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.740 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.740 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.741 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.741 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.741 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.741 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.741 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.741 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.741 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.741 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.741 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.742 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-12-09T10:52:23.705954) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.742 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-12-09T10:52:23.711535) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.743 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-12-09T10:52:23.717413) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.743 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-12-09T10:52:23.719960) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.743 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-12-09T10:52:23.723048) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.743 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-12-09T10:52:23.728415) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.744 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-12-09T10:52:23.730599) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.744 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-12-09T10:52:23.731875) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.744 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-12-09T10:52:23.732896) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.745 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-12-09T10:52:23.734146) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.745 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-12-09T10:52:23.735409) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.745 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-12-09T10:52:23.736726) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  9 10:52:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:52:23.746 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-12-09T10:52:23.738110) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
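The ceilometer cycle above is highly regular: for each meter the agent runs discovery, checks whether coordination is needed, updates the pollster heartbeat, and then emits one _stats_to_sample DEBUG record per instance/meter pair in the fixed form "<instance-uuid>/<meter> volume: <value>". That regularity makes the cycle easy to post-process. A minimal sketch, assuming a hypothetical stand-alone helper (not part of ceilometer), that collects the last polled value per instance and meter from an excerpt like this one:

    import re
    from collections import defaultdict

    # Matches e.g. "1bddf2bf-.../cpu volume: 69030000000 _stats_to_sample"
    SAMPLE_RE = re.compile(
        r"(?P<uuid>[0-9a-f]{8}(?:-[0-9a-f]{4}){3}-[0-9a-f]{12})"
        r"/(?P<meter>[\w.]+) volume: (?P<value>[\d.]+)"
    )

    def latest_samples(lines):
        """Return {instance_uuid: {meter: last_seen_value}}."""
        samples = defaultdict(dict)
        for line in lines:
            m = SAMPLE_RE.search(line)
            if m:
                samples[m.group("uuid")][m.group("meter")] = float(m.group("value"))
        return samples

Run over the lines above this yields, for example, memory.usage = 49.13671875 for instance 1bddf2bf-8932-4428-97d7-7342a7ec414b; note that per-device meters such as disk.device.write.bytes log one record per disk, so this sketch keeps only the last device seen.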
Dec  9 10:52:25 compute-0 nova_compute[189493]: 2025-12-09 10:52:25.837 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  9 10:52:25 compute-0 nova_compute[189493]: 2025-12-09 10:52:25.840 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  9 10:52:27 compute-0 nova_compute[189493]: 2025-12-09 10:52:27.119 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 10:52:27 compute-0 nova_compute[189493]: 2025-12-09 10:52:27.681 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 10:52:27 compute-0 nova_compute[189493]: 2025-12-09 10:52:27.841 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  9 10:52:28 compute-0 nova_compute[189493]: 2025-12-09 10:52:28.841 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  9 10:52:28 compute-0 nova_compute[189493]: 2025-12-09 10:52:28.842 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec  9 10:52:28 compute-0 nova_compute[189493]: 2025-12-09 10:52:28.843 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec  9 10:52:28 compute-0 podman[241528]: 2025-12-09 10:52:28.9754157 +0000 UTC m=+0.118477161 container health_status 0391d8911d61abd7376f1f93f329cadfe8d3add845c9e6f46fc2c3dfbcc4f02a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=multipathd)
Dec  9 10:52:29 compute-0 podman[203687]: time="2025-12-09T10:52:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  9 10:52:29 compute-0 podman[203687]: @ - - [09/Dec/2025:10:52:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Dec  9 10:52:29 compute-0 podman[203687]: @ - - [09/Dec/2025:10:52:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4776 "" "Go-http-client/1.1"
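The two HTTP access records show where the container health and stats figures come from: prometheus-podman-exporter polls the podman system service over its unix socket via the libpod REST API (/v4.9.3/libpod/containers/json and /v4.9.3/libpod/containers/stats). A stdlib-only Python sketch of the same list call, assuming the socket path /run/podman/podman.sock that appears in the exporter configuration below:

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """http.client over an AF_UNIX socket instead of TCP."""
        def __init__(self, socket_path):
            super().__init__("localhost")
            self.socket_path = socket_path

        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self.socket_path)

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    for c in json.loads(conn.getresponse().read()):
        print(c["Names"], c["State"])

Each returned entry carries, among other fields, the container names and state that the exporter turns into metrics.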
Dec  9 10:52:29 compute-0 nova_compute[189493]: 2025-12-09 10:52:29.926 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Acquiring lock "refresh_cache-41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec  9 10:52:29 compute-0 nova_compute[189493]: 2025-12-09 10:52:29.927 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Acquired lock "refresh_cache-41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec  9 10:52:29 compute-0 nova_compute[189493]: 2025-12-09 10:52:29.927 189497 DEBUG nova.network.neutron [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] [instance: 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec  9 10:52:29 compute-0 nova_compute[189493]: 2025-12-09 10:52:29.928 189497 DEBUG nova.objects.instance [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec  9 10:52:31 compute-0 openstack_network_exporter[205823]: ERROR   10:52:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  9 10:52:31 compute-0 openstack_network_exporter[205823]: ERROR   10:52:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  9 10:52:31 compute-0 openstack_network_exporter[205823]: ERROR   10:52:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  9 10:52:31 compute-0 openstack_network_exporter[205823]: ERROR   10:52:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  9 10:52:31 compute-0 openstack_network_exporter[205823]: ERROR   10:52:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
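These exporter ERRORs reflect the node's role rather than a fault: ovn-appctl and ovs-appctl reach a daemon through its <daemon>.<pid>.ctl control socket in the daemon's run directory, and a compute node runs neither ovn-northd nor the central OVN database server, so no such sockets exist here; the dpif-netdev/pmd-* calls additionally apply only to a DPDK userspace datapath, which this host does not use. A sketch of the same socket lookup, with the run directories as assumed defaults for this deployment:

    from glob import glob

    # appctl-style lookup: a live daemon owns <rundir>/<name>.<pid>.ctl
    def ctl_sockets(rundir, daemon):
        return glob(f"{rundir}/{daemon}.*.ctl")

    print(ctl_sockets("/var/run/ovn", "ovn-northd"))            # expected [] on compute-0
    print(ctl_sockets("/var/run/openvswitch", "ovs-vswitchd"))  # present where OVS runs locally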
Dec  9 10:52:32 compute-0 nova_compute[189493]: 2025-12-09 10:52:32.123 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 10:52:32 compute-0 nova_compute[189493]: 2025-12-09 10:52:32.684 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 10:52:32 compute-0 podman[241549]: 2025-12-09 10:52:32.981514331 +0000 UTC m=+0.123328276 container health_status 8508a94dacd5acdb5dbf860f4282331529be5c86ebd3e90b10e1dde8bc5013e9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec  9 10:52:33 compute-0 nova_compute[189493]: 2025-12-09 10:52:33.240 189497 DEBUG nova.network.neutron [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] [instance: 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f] Updating instance_info_cache with network_info: [{"id": "2c684388-b6d9-4de0-8691-29807fabed2c", "address": "fa:16:3e:c7:65:39", "network": {"id": "c5af7354-5afe-400a-9e13-5500648117d8", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.250", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.226", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "736bbfddbeea47e3ac9d863ba120b8f2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2c684388-b6", "ovs_interfaceid": "2c684388-b6d9-4de0-8691-29807fabed2c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
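The info_cache payload in that record is ordinary JSON once the oslo.log prefix is stripped. For quick inspection, a snippet that pulls the MAC, fixed IP, and floating IP back out of it (literal trimmed to the fields touched; values copied from the entry above):

    import json

    nw_info = json.loads("""[{"id": "2c684388-b6d9-4de0-8691-29807fabed2c",
      "address": "fa:16:3e:c7:65:39",
      "network": {"subnets": [{"ips": [{"address": "192.168.0.250",
        "floating_ips": [{"address": "192.168.122.226"}]}]}]}}]""")

    for vif in nw_info:
        for subnet in vif["network"]["subnets"]:
            for ip in subnet["ips"]:
                floats = [f["address"] for f in ip.get("floating_ips", [])]
                print(vif["id"], vif["address"], ip["address"], floats)
    # -> 2c684388-b6d9-4de0-8691-29807fabed2c fa:16:3e:c7:65:39
    #    192.168.0.250 ['192.168.122.226']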
Dec  9 10:52:33 compute-0 nova_compute[189493]: 2025-12-09 10:52:33.342 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Releasing lock "refresh_cache-41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  9 10:52:33 compute-0 nova_compute[189493]: 2025-12-09 10:52:33.343 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] [instance: 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  9 10:52:33 compute-0 nova_compute[189493]: 2025-12-09 10:52:33.343 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  9 10:52:33 compute-0 nova_compute[189493]: 2025-12-09 10:52:33.344 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  9 10:52:33 compute-0 nova_compute[189493]: 2025-12-09 10:52:33.344 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  9 10:52:33 compute-0 nova_compute[189493]: 2025-12-09 10:52:33.345 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  9 10:52:33 compute-0 nova_compute[189493]: 2025-12-09 10:52:33.497 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  9 10:52:33 compute-0 nova_compute[189493]: 2025-12-09 10:52:33.499 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  9 10:52:33 compute-0 nova_compute[189493]: 2025-12-09 10:52:33.499 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
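The acquiring/acquired/released triple above is oslo.concurrency's standard lock trace: the "inner" wrapper logs each transition around the decorated method, with the wait and hold times in the acquired and released lines. A minimal sketch of the pattern (not nova's actual decorator stack, which routes through its own utils module):

    from oslo_concurrency import lockutils

    # By default synchronized() takes an in-process fair semaphore named
    # "compute_resources"; entering and leaving the decorated function
    # produces exactly the three DEBUG lines seen above.
    @lockutils.synchronized("compute_resources")
    def clean_compute_node_cache():
        ...  # body runs with the lock held; release is logged on return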
Dec  9 10:52:33 compute-0 nova_compute[189493]: 2025-12-09 10:52:33.500 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  9 10:52:33 compute-0 nova_compute[189493]: 2025-12-09 10:52:33.603 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1bddf2bf-8932-4428-97d7-7342a7ec414b/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  9 10:52:33 compute-0 nova_compute[189493]: 2025-12-09 10:52:33.681 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1bddf2bf-8932-4428-97d7-7342a7ec414b/disk --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  9 10:52:33 compute-0 nova_compute[189493]: 2025-12-09 10:52:33.683 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1bddf2bf-8932-4428-97d7-7342a7ec414b/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  9 10:52:33 compute-0 nova_compute[189493]: 2025-12-09 10:52:33.760 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1bddf2bf-8932-4428-97d7-7342a7ec414b/disk --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  9 10:52:33 compute-0 nova_compute[189493]: 2025-12-09 10:52:33.762 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  9 10:52:33 compute-0 nova_compute[189493]: 2025-12-09 10:52:33.865 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.eph0 --force-share --output=json" returned: 0 in 0.103s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  9 10:52:33 compute-0 nova_compute[189493]: 2025-12-09 10:52:33.867 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  9 10:52:33 compute-0 nova_compute[189493]: 2025-12-09 10:52:33.965 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.eph0 --force-share --output=json" returned: 0 in 0.098s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  9 10:52:33 compute-0 nova_compute[189493]: 2025-12-09 10:52:33.973 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  9 10:52:34 compute-0 nova_compute[189493]: 2025-12-09 10:52:34.068 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk --force-share --output=json" returned: 0 in 0.095s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  9 10:52:34 compute-0 nova_compute[189493]: 2025-12-09 10:52:34.069 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  9 10:52:34 compute-0 nova_compute[189493]: 2025-12-09 10:52:34.164 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk --force-share --output=json" returned: 0 in 0.095s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  9 10:52:34 compute-0 nova_compute[189493]: 2025-12-09 10:52:34.165 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  9 10:52:34 compute-0 nova_compute[189493]: 2025-12-09 10:52:34.259 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.eph0 --force-share --output=json" returned: 0 in 0.094s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  9 10:52:34 compute-0 nova_compute[189493]: 2025-12-09 10:52:34.261 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  9 10:52:34 compute-0 nova_compute[189493]: 2025-12-09 10:52:34.321 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.eph0 --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
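Each disk probe in this audit pass is qemu-img info run under oslo.concurrency's prlimit wrapper, which caps the child's address space at 1073741824 bytes (1 GiB) and its CPU time at 30 s so a malformed image cannot wedge the compute service; nova walks every backing file (disk and disk.eph0) for both instances, which is why the same command repeats. Reproduced standalone, using the exact command line from the trace and one of the paths logged above:

    import json
    import subprocess

    def qemu_img_info(path):
        # Same invocation nova logs above: prlimit-capped qemu-img info.
        cmd = [
            "/usr/bin/python3", "-m", "oslo_concurrency.prlimit",
            "--as=1073741824",          # address-space cap: 1 GiB
            "--cpu=30",                 # CPU-seconds cap
            "--", "env", "LC_ALL=C", "LANG=C",
            "qemu-img", "info", path, "--force-share", "--output=json",
        ]
        return json.loads(subprocess.check_output(cmd))

    info = qemu_img_info(
        "/var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk")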
Dec  9 10:52:34 compute-0 nova_compute[189493]: 2025-12-09 10:52:34.711 189497 WARNING nova.virt.libvirt.driver [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  9 10:52:34 compute-0 nova_compute[189493]: 2025-12-09 10:52:34.713 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5035MB free_disk=72.16301727294922GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  9 10:52:34 compute-0 nova_compute[189493]: 2025-12-09 10:52:34.713 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  9 10:52:34 compute-0 nova_compute[189493]: 2025-12-09 10:52:34.713 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  9 10:52:34 compute-0 nova_compute[189493]: 2025-12-09 10:52:34.817 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Instance 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  9 10:52:34 compute-0 nova_compute[189493]: 2025-12-09 10:52:34.818 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Instance 1bddf2bf-8932-4428-97d7-7342a7ec414b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  9 10:52:34 compute-0 nova_compute[189493]: 2025-12-09 10:52:34.819 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  9 10:52:34 compute-0 nova_compute[189493]: 2025-12-09 10:52:34.820 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
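That final view is consistent with the two placement allocations reported just above (each {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}) plus the 512 MB of host-reserved memory visible in the inventory two lines down. As a quick cross-check:

    reserved_mb = 512           # MEMORY_MB "reserved" in the inventory below
    alloc_mb    = [512, 512]    # per-instance MEMORY_MB allocations
    alloc_vcpu  = [1, 1]        # per-instance VCPU allocations
    alloc_disk  = [2, 2]        # per-instance DISK_GB allocations

    assert reserved_mb + sum(alloc_mb) == 1536   # used_ram=1536MB
    assert sum(alloc_vcpu) == 2                  # used_vcpus=2
    assert sum(alloc_disk) == 4                  # used_disk=4GB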
Dec  9 10:52:34 compute-0 nova_compute[189493]: 2025-12-09 10:52:34.899 189497 DEBUG nova.compute.provider_tree [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Inventory has not changed in ProviderTree for provider: cdc1168d-33c9-4d2c-8f23-1b695a68afd0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  9 10:52:34 compute-0 nova_compute[189493]: 2025-12-09 10:52:34.915 189497 DEBUG nova.scheduler.client.report [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Inventory has not changed for provider cdc1168d-33c9-4d2c-8f23-1b695a68afd0 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
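Placement turns that inventory into schedulable capacity per resource class as (total - reserved) * allocation_ratio, so with the ratios reported here the node oversubscribes CPU 4x while deliberately under-committing disk. A worked check against the values in the record above:

    def capacity(total, reserved, ratio):
        # Placement's effective-capacity formula for one resource class.
        return (total - reserved) * ratio

    assert capacity(8, 0, 4.0) == 32.0               # VCPU: 32 schedulable
    assert capacity(7679, 512, 1.0) == 7167.0        # MEMORY_MB
    assert abs(capacity(79, 1, 0.9) - 70.2) < 1e-9   # DISK_GB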
Dec  9 10:52:34 compute-0 nova_compute[189493]: 2025-12-09 10:52:34.917 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  9 10:52:34 compute-0 nova_compute[189493]: 2025-12-09 10:52:34.917 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.204s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  9 10:52:35 compute-0 nova_compute[189493]: 2025-12-09 10:52:35.415 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  9 10:52:35 compute-0 nova_compute[189493]: 2025-12-09 10:52:35.416 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  9 10:52:36 compute-0 podman[241595]: 2025-12-09 10:52:36.972659555 +0000 UTC m=+0.113806680 container health_status ceb1c84a2b093143b9383b7e11364d7e851348d724743a0cd9ce4fd0c7070c92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20251202, config_id=edpm, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  9 10:52:37 compute-0 nova_compute[189493]: 2025-12-09 10:52:37.127 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 10:52:37 compute-0 nova_compute[189493]: 2025-12-09 10:52:37.687 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 10:52:37 compute-0 podman[241618]: 2025-12-09 10:52:37.981461837 +0000 UTC m=+0.127934654 container health_status 8ad198c17f1da12dc50d5e17562d0139fb2a2f84db056ee9551dbf4f34c4cb9d (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., managed_by=edpm_ansible, release=1214.1726694543, build-date=2024-09-18T21:23:30, io.openshift.tags=base rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, com.redhat.component=ubi9-container, io.openshift.expose-services=, config_id=edpm, release-0.7.12=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, container_name=kepler, architecture=x86_64, name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., io.buildah.version=1.29.0, version=9.4, distribution-scope=public, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f)
Dec  9 10:52:39 compute-0 podman[241637]: 2025-12-09 10:52:39.96715091 +0000 UTC m=+0.119903218 container health_status 8f562587c42532f877bd4ac5090cf2d81dd9415b6201e22f74972e6d6b9e9403 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  9 10:52:39 compute-0 podman[241638]: 2025-12-09 10:52:39.993658493 +0000 UTC m=+0.127830022 container health_status b432835229990b9e7cd237d75f8273b15e565fca524d4ea9a7c1f1bf3c773614 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, managed_by=edpm_ansible)
Dec  9 10:52:42 compute-0 nova_compute[189493]: 2025-12-09 10:52:42.131 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 10:52:42 compute-0 nova_compute[189493]: 2025-12-09 10:52:42.690 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 10:52:45 compute-0 podman[241676]: 2025-12-09 10:52:45.958687477 +0000 UTC m=+0.108592907 container health_status 5da5cd4e36e0bba48fb617392bc8983ed1dbced7e4599ef74bb3327a2d50468d (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, version=9.6, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=edpm, maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, vcs-type=git, container_name=openstack_network_exporter, distribution-scope=public, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Dec  9 10:52:47 compute-0 nova_compute[189493]: 2025-12-09 10:52:47.136 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 10:52:47 compute-0 nova_compute[189493]: 2025-12-09 10:52:47.693 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 10:52:48 compute-0 podman[241698]: 2025-12-09 10:52:48.000672089 +0000 UTC m=+0.157088525 container health_status e0a077177b2f078df1f170a6e5c0e8e08d4365b999ec0c487047ed6ab628f3d6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Dec  9 10:52:50 compute-0 podman[241724]: 2025-12-09 10:52:50.968279072 +0000 UTC m=+0.121914139 container health_status d3a438131bb4ae6fd62d2e1493edbbbd51d1b8d6cbe1e9243f414a3aa421452b (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  9 10:52:52 compute-0 nova_compute[189493]: 2025-12-09 10:52:52.139 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 10:52:52 compute-0 nova_compute[189493]: 2025-12-09 10:52:52.698 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 10:52:57 compute-0 nova_compute[189493]: 2025-12-09 10:52:57.144 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 10:52:57 compute-0 nova_compute[189493]: 2025-12-09 10:52:57.701 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 10:52:59 compute-0 podman[203687]: time="2025-12-09T10:52:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  9 10:52:59 compute-0 podman[203687]: @ - - [09/Dec/2025:10:52:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Dec  9 10:52:59 compute-0 podman[203687]: @ - - [09/Dec/2025:10:52:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4781 "" "Go-http-client/1.1"
Dec  9 10:52:59 compute-0 podman[241747]: 2025-12-09 10:52:59.97669851 +0000 UTC m=+0.114755835 container health_status 0391d8911d61abd7376f1f93f329cadfe8d3add845c9e6f46fc2c3dfbcc4f02a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=multipathd, managed_by=edpm_ansible, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Dec  9 10:53:01 compute-0 openstack_network_exporter[205823]: ERROR   10:53:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  9 10:53:01 compute-0 openstack_network_exporter[205823]: ERROR   10:53:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  9 10:53:01 compute-0 openstack_network_exporter[205823]: ERROR   10:53:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  9 10:53:01 compute-0 openstack_network_exporter[205823]: ERROR   10:53:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  9 10:53:01 compute-0 openstack_network_exporter[205823]: ERROR   10:53:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  9 10:53:02 compute-0 nova_compute[189493]: 2025-12-09 10:53:02.147 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 10:53:02 compute-0 nova_compute[189493]: 2025-12-09 10:53:02.705 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 10:53:03 compute-0 podman[241768]: 2025-12-09 10:53:03.960730622 +0000 UTC m=+0.110755342 container health_status 8508a94dacd5acdb5dbf860f4282331529be5c86ebd3e90b10e1dde8bc5013e9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  9 10:53:07 compute-0 nova_compute[189493]: 2025-12-09 10:53:07.155 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 10:53:07 compute-0 nova_compute[189493]: 2025-12-09 10:53:07.710 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 10:53:07 compute-0 podman[241793]: 2025-12-09 10:53:07.980592546 +0000 UTC m=+0.123795648 container health_status ceb1c84a2b093143b9383b7e11364d7e851348d724743a0cd9ce4fd0c7070c92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec  9 10:53:08 compute-0 podman[241813]: 2025-12-09 10:53:08.987017647 +0000 UTC m=+0.128318695 container health_status 8ad198c17f1da12dc50d5e17562d0139fb2a2f84db056ee9551dbf4f34c4cb9d (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, release=1214.1726694543, container_name=kepler, name=ubi9, vcs-type=git, com.redhat.component=ubi9-container, architecture=x86_64, io.openshift.tags=base rhel9, io.k8s.display-name=Red Hat Universal Base Image 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, build-date=2024-09-18T21:23:30, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., config_id=edpm, io.openshift.expose-services=, summary=Provides the latest release of Red Hat Universal Base Image 9., io.buildah.version=1.29.0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., release-0.7.12=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f)
Dec  9 10:53:10 compute-0 podman[241833]: 2025-12-09 10:53:10.91175091 +0000 UTC m=+0.064363558 container health_status 8f562587c42532f877bd4ac5090cf2d81dd9415b6201e22f74972e6d6b9e9403 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent)
Dec  9 10:53:10 compute-0 podman[241834]: 2025-12-09 10:53:10.948073265 +0000 UTC m=+0.097924462 container health_status b432835229990b9e7cd237d75f8273b15e565fca524d4ea9a7c1f1bf3c773614 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_id=edpm, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image)
Dec  9 10:53:12 compute-0 nova_compute[189493]: 2025-12-09 10:53:12.154 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 10:53:12 compute-0 nova_compute[189493]: 2025-12-09 10:53:12.715 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 10:53:16 compute-0 podman[241872]: 2025-12-09 10:53:16.964752359 +0000 UTC m=+0.107044406 container health_status 5da5cd4e36e0bba48fb617392bc8983ed1dbced7e4599ef74bb3327a2d50468d (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, release=1755695350, build-date=2025-08-20T13:12:41, config_id=edpm, architecture=x86_64, distribution-scope=public, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git)
Dec  9 10:53:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:53:16.982 106644 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  9 10:53:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:53:16.983 106644 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  9 10:53:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:53:16.983 106644 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  9 10:53:17 compute-0 nova_compute[189493]: 2025-12-09 10:53:17.155 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 10:53:17 compute-0 nova_compute[189493]: 2025-12-09 10:53:17.719 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 10:53:19 compute-0 podman[241893]: 2025-12-09 10:53:19.000194644 +0000 UTC m=+0.140688014 container health_status e0a077177b2f078df1f170a6e5c0e8e08d4365b999ec0c487047ed6ab628f3d6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.vendor=CentOS, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  9 10:53:21 compute-0 podman[241920]: 2025-12-09 10:53:21.940217546 +0000 UTC m=+0.096938176 container health_status d3a438131bb4ae6fd62d2e1493edbbbd51d1b8d6cbe1e9243f414a3aa421452b (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
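The node_exporter health entry above shows the container started with --collector.systemd.unit-include, so only the listed unit patterns are scraped. A quick way to see which systemd units that pattern admits (Python re as a stand-in for node_exporter's anchored Go regexp, so fullmatch() is the right analogue):

    import re

    # Same pattern as the --collector.systemd.unit-include flag above.
    pattern = re.compile(r'(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service')
    for unit in ('ovsdb-server.service', 'virtqemud.service', 'sshd.service'):
        print(unit, bool(pattern.fullmatch(unit)))
    # ovsdb-server.service True / virtqemud.service True / sshd.service False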
Dec  9 10:53:22 compute-0 nova_compute[189493]: 2025-12-09 10:53:22.159 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 10:53:22 compute-0 nova_compute[189493]: 2025-12-09 10:53:22.723 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 10:53:25 compute-0 nova_compute[189493]: 2025-12-09 10:53:25.843 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  9 10:53:26 compute-0 nova_compute[189493]: 2025-12-09 10:53:26.837 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  9 10:53:27 compute-0 nova_compute[189493]: 2025-12-09 10:53:27.168 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 10:53:27 compute-0 nova_compute[189493]: 2025-12-09 10:53:27.727 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 10:53:27 compute-0 nova_compute[189493]: 2025-12-09 10:53:27.840 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  9 10:53:29 compute-0 podman[203687]: time="2025-12-09T10:53:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  9 10:53:29 compute-0 podman[203687]: @ - - [09/Dec/2025:10:53:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Dec  9 10:53:29 compute-0 podman[203687]: @ - - [09/Dec/2025:10:53:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4775 "" "Go-http-client/1.1"
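The two podman[203687] access-log lines above are the libpod REST API being polled over the podman socket (podman_exporter's CONTAINER_HOST later in this log points at unix:///run/podman/podman.sock). A minimal sketch of the same containers/json call from Python, assuming that socket path:

    import http.client
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        # http.client over an AF_UNIX socket; podman has no TCP listener here.
        def __init__(self, path):
            super().__init__('localhost')
            self.unix_path = path

        def connect(self):
            sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            sock.connect(self.unix_path)
            self.sock = sock

    conn = UnixHTTPConnection('/run/podman/podman.sock')
    conn.request('GET', '/v4.9.3/libpod/containers/json?all=true')
    resp = conn.getresponse()
    print(resp.status, len(resp.read()))  # 200 and a body size like the 29523 logged above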
Dec  9 10:53:29 compute-0 nova_compute[189493]: 2025-12-09 10:53:29.842 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  9 10:53:29 compute-0 nova_compute[189493]: 2025-12-09 10:53:29.844 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec  9 10:53:30 compute-0 podman[241941]: 2025-12-09 10:53:30.931599006 +0000 UTC m=+0.087223427 container health_status 0391d8911d61abd7376f1f93f329cadfe8d3add845c9e6f46fc2c3dfbcc4f02a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_id=multipathd, io.buildah.version=1.41.3)
Dec  9 10:53:30 compute-0 nova_compute[189493]: 2025-12-09 10:53:30.993 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Acquiring lock "refresh_cache-1bddf2bf-8932-4428-97d7-7342a7ec414b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec  9 10:53:30 compute-0 nova_compute[189493]: 2025-12-09 10:53:30.994 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Acquired lock "refresh_cache-1bddf2bf-8932-4428-97d7-7342a7ec414b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec  9 10:53:30 compute-0 nova_compute[189493]: 2025-12-09 10:53:30.994 189497 DEBUG nova.network.neutron [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] [instance: 1bddf2bf-8932-4428-97d7-7342a7ec414b] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec  9 10:53:31 compute-0 openstack_network_exporter[205823]: ERROR   10:53:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  9 10:53:31 compute-0 openstack_network_exporter[205823]: ERROR   10:53:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  9 10:53:31 compute-0 openstack_network_exporter[205823]: ERROR   10:53:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  9 10:53:31 compute-0 openstack_network_exporter[205823]: ERROR   10:53:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  9 10:53:31 compute-0 openstack_network_exporter[205823]: ERROR   10:53:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
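These exporter errors recur every polling cycle and are expected on a compute node: ovn-northd only runs on OVN database/controller nodes, so its control socket is legitimately absent here. A small check of the same precondition the exporter tests, assuming the /run/openvswitch and /run/ovn mount paths from the openstack_network_exporter config_data earlier in this log:

    import glob

    # appctl needs a <daemon>.<pid>.ctl control socket to talk to each daemon;
    # "no control socket files found" above means these globs came up empty.
    for pattern in ('/run/openvswitch/ovsdb-server.*.ctl',
                    '/run/ovn/ovn-northd.*.ctl'):
        matches = glob.glob(pattern)
        print(pattern, '->', matches or 'no control socket files found')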
Dec  9 10:53:32 compute-0 nova_compute[189493]: 2025-12-09 10:53:32.169 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 10:53:32 compute-0 nova_compute[189493]: 2025-12-09 10:53:32.733 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 10:53:34 compute-0 nova_compute[189493]: 2025-12-09 10:53:34.169 189497 DEBUG nova.network.neutron [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] [instance: 1bddf2bf-8932-4428-97d7-7342a7ec414b] Updating instance_info_cache with network_info: [{"id": "7819acf8-daa2-4391-96d4-ef33c260f794", "address": "fa:16:3e:01:4e:b4", "network": {"id": "c5af7354-5afe-400a-9e13-5500648117d8", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.212", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.172", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "736bbfddbeea47e3ac9d863ba120b8f2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7819acf8-da", "ovs_interfaceid": "7819acf8-daa2-4391-96d4-ef33c260f794", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec  9 10:53:34 compute-0 nova_compute[189493]: 2025-12-09 10:53:34.186 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Releasing lock "refresh_cache-1bddf2bf-8932-4428-97d7-7342a7ec414b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec  9 10:53:34 compute-0 nova_compute[189493]: 2025-12-09 10:53:34.187 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] [instance: 1bddf2bf-8932-4428-97d7-7342a7ec414b] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
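The "Updating instance_info_cache" entry above embeds the port's network_info as JSON. A sketch that pulls the fixed and floating addresses out of such a payload (trimmed here to just the fields used):

    import json

    network_info = json.loads('''[{"address": "fa:16:3e:01:4e:b4",
      "network": {"subnets": [{"ips": [{"address": "192.168.0.212",
        "floating_ips": [{"address": "192.168.122.172"}]}]}]}}]''')

    for vif in network_info:
        for subnet in vif['network']['subnets']:
            for ip in subnet['ips']:
                floating = [f['address'] for f in ip.get('floating_ips', [])]
                print(vif['address'], ip['address'], floating)
    # fa:16:3e:01:4e:b4 192.168.0.212 ['192.168.122.172']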
Dec  9 10:53:34 compute-0 nova_compute[189493]: 2025-12-09 10:53:34.188 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  9 10:53:34 compute-0 nova_compute[189493]: 2025-12-09 10:53:34.188 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  9 10:53:34 compute-0 nova_compute[189493]: 2025-12-09 10:53:34.189 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  9 10:53:34 compute-0 nova_compute[189493]: 2025-12-09 10:53:34.190 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  9 10:53:34 compute-0 nova_compute[189493]: 2025-12-09 10:53:34.220 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  9 10:53:34 compute-0 nova_compute[189493]: 2025-12-09 10:53:34.221 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  9 10:53:34 compute-0 nova_compute[189493]: 2025-12-09 10:53:34.222 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  9 10:53:34 compute-0 nova_compute[189493]: 2025-12-09 10:53:34.222 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec  9 10:53:34 compute-0 nova_compute[189493]: 2025-12-09 10:53:34.311 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1bddf2bf-8932-4428-97d7-7342a7ec414b/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  9 10:53:34 compute-0 nova_compute[189493]: 2025-12-09 10:53:34.417 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1bddf2bf-8932-4428-97d7-7342a7ec414b/disk --force-share --output=json" returned: 0 in 0.106s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  9 10:53:34 compute-0 nova_compute[189493]: 2025-12-09 10:53:34.419 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1bddf2bf-8932-4428-97d7-7342a7ec414b/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  9 10:53:34 compute-0 nova_compute[189493]: 2025-12-09 10:53:34.489 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1bddf2bf-8932-4428-97d7-7342a7ec414b/disk --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  9 10:53:34 compute-0 nova_compute[189493]: 2025-12-09 10:53:34.491 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  9 10:53:34 compute-0 podman[241963]: 2025-12-09 10:53:34.502536701 +0000 UTC m=+0.148110163 container health_status 8508a94dacd5acdb5dbf860f4282331529be5c86ebd3e90b10e1dde8bc5013e9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec  9 10:53:34 compute-0 nova_compute[189493]: 2025-12-09 10:53:34.558 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.eph0 --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  9 10:53:34 compute-0 nova_compute[189493]: 2025-12-09 10:53:34.559 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  9 10:53:34 compute-0 nova_compute[189493]: 2025-12-09 10:53:34.632 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.eph0 --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  9 10:53:34 compute-0 nova_compute[189493]: 2025-12-09 10:53:34.642 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  9 10:53:34 compute-0 nova_compute[189493]: 2025-12-09 10:53:34.720 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  9 10:53:34 compute-0 nova_compute[189493]: 2025-12-09 10:53:34.721 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  9 10:53:34 compute-0 nova_compute[189493]: 2025-12-09 10:53:34.782 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  9 10:53:34 compute-0 nova_compute[189493]: 2025-12-09 10:53:34.783 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  9 10:53:34 compute-0 nova_compute[189493]: 2025-12-09 10:53:34.846 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.eph0 --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  9 10:53:34 compute-0 nova_compute[189493]: 2025-12-09 10:53:34.847 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  9 10:53:34 compute-0 nova_compute[189493]: 2025-12-09 10:53:34.904 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.eph0 --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
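Each disk probe in this audit runs qemu-img info under oslo.concurrency's prlimit wrapper, which is what the "--as=1073741824 --cpu=30" arguments above encode: a 1 GiB address-space cap and a 30 s CPU cap, so a misbehaving qemu-img cannot exhaust the host. A minimal sketch of the same call (disk path taken from the log; assumes oslo.concurrency and qemu-img are available):

    from oslo_concurrency import processutils

    # Equivalent to "prlimit --as=1073741824 --cpu=30" in the logged command.
    limits = processutils.ProcessLimits(address_space=1024 ** 3, cpu_time=30)
    out, _err = processutils.execute(
        'env', 'LC_ALL=C', 'LANG=C',
        'qemu-img', 'info',
        '/var/lib/nova/instances/1bddf2bf-8932-4428-97d7-7342a7ec414b/disk',
        '--force-share', '--output=json',
        prlimit=limits)
    print(out)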
Dec  9 10:53:35 compute-0 nova_compute[189493]: 2025-12-09 10:53:35.307 189497 WARNING nova.virt.libvirt.driver [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec  9 10:53:35 compute-0 nova_compute[189493]: 2025-12-09 10:53:35.309 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5050MB free_disk=72.16301727294922GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec  9 10:53:35 compute-0 nova_compute[189493]: 2025-12-09 10:53:35.309 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  9 10:53:35 compute-0 nova_compute[189493]: 2025-12-09 10:53:35.309 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  9 10:53:35 compute-0 nova_compute[189493]: 2025-12-09 10:53:35.601 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Instance 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec  9 10:53:35 compute-0 nova_compute[189493]: 2025-12-09 10:53:35.602 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Instance 1bddf2bf-8932-4428-97d7-7342a7ec414b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec  9 10:53:35 compute-0 nova_compute[189493]: 2025-12-09 10:53:35.602 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec  9 10:53:35 compute-0 nova_compute[189493]: 2025-12-09 10:53:35.603 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec  9 10:53:35 compute-0 nova_compute[189493]: 2025-12-09 10:53:35.673 189497 DEBUG nova.compute.provider_tree [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Inventory has not changed in ProviderTree for provider: cdc1168d-33c9-4d2c-8f23-1b695a68afd0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec  9 10:53:35 compute-0 nova_compute[189493]: 2025-12-09 10:53:35.692 189497 DEBUG nova.scheduler.client.report [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Inventory has not changed for provider cdc1168d-33c9-4d2c-8f23-1b695a68afd0 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec  9 10:53:35 compute-0 nova_compute[189493]: 2025-12-09 10:53:35.694 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec  9 10:53:35 compute-0 nova_compute[189493]: 2025-12-09 10:53:35.695 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.385s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
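The inventory dict reported to placement combines total, reserved, and allocation_ratio per resource class; placement's usable capacity is (total - reserved) * allocation_ratio. Worked out with the exact figures logged at 10:53:35:

    # Values copied from the set_inventory_for_provider entry above.
    inventory = {
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'DISK_GB':   {'total': 79,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        print(rc, (inv['total'] - inv['reserved']) * inv['allocation_ratio'])
    # MEMORY_MB 7167.0 / VCPU 32.0 / DISK_GB 70.2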
Dec  9 10:53:36 compute-0 nova_compute[189493]: 2025-12-09 10:53:36.347 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  9 10:53:36 compute-0 nova_compute[189493]: 2025-12-09 10:53:36.383 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  9 10:53:36 compute-0 nova_compute[189493]: 2025-12-09 10:53:36.383 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec  9 10:53:37 compute-0 nova_compute[189493]: 2025-12-09 10:53:37.172 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 10:53:37 compute-0 nova_compute[189493]: 2025-12-09 10:53:37.737 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 10:53:38 compute-0 podman[242008]: 2025-12-09 10:53:38.953940476 +0000 UTC m=+0.103610578 container health_status ceb1c84a2b093143b9383b7e11364d7e851348d724743a0cd9ce4fd0c7070c92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=edpm, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true)
Dec  9 10:53:39 compute-0 podman[242028]: 2025-12-09 10:53:39.953456269 +0000 UTC m=+0.108833403 container health_status 8ad198c17f1da12dc50d5e17562d0139fb2a2f84db056ee9551dbf4f34c4cb9d (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, name=ubi9, io.buildah.version=1.29.0, vendor=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2024-09-18T21:23:30, io.openshift.expose-services=, release-0.7.12=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=base rhel9, managed_by=edpm_ansible, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, distribution-scope=public, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4, com.redhat.component=ubi9-container, config_id=edpm, release=1214.1726694543)
Dec  9 10:53:41 compute-0 podman[242048]: 2025-12-09 10:53:41.97748999 +0000 UTC m=+0.116121041 container health_status 8f562587c42532f877bd4ac5090cf2d81dd9415b6201e22f74972e6d6b9e9403 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Dec  9 10:53:41 compute-0 podman[242049]: 2025-12-09 10:53:41.977581782 +0000 UTC m=+0.115231037 container health_status b432835229990b9e7cd237d75f8273b15e565fca524d4ea9a7c1f1bf3c773614 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image)
Dec  9 10:53:42 compute-0 nova_compute[189493]: 2025-12-09 10:53:42.177 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 10:53:42 compute-0 nova_compute[189493]: 2025-12-09 10:53:42.741 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 10:53:47 compute-0 nova_compute[189493]: 2025-12-09 10:53:47.180 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 10:53:47 compute-0 nova_compute[189493]: 2025-12-09 10:53:47.747 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 10:53:48 compute-0 podman[242082]: 2025-12-09 10:53:48.000560518 +0000 UTC m=+0.133125799 container health_status 5da5cd4e36e0bba48fb617392bc8983ed1dbced7e4599ef74bb3327a2d50468d (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, io.buildah.version=1.33.7, io.openshift.expose-services=, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, vcs-type=git, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, version=9.6, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers)
Dec  9 10:53:50 compute-0 podman[242104]: 2025-12-09 10:53:50.026589989 +0000 UTC m=+0.176858674 container health_status e0a077177b2f078df1f170a6e5c0e8e08d4365b999ec0c487047ed6ab628f3d6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Dec  9 10:53:52 compute-0 nova_compute[189493]: 2025-12-09 10:53:52.184 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 10:53:52 compute-0 nova_compute[189493]: 2025-12-09 10:53:52.751 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 10:53:53 compute-0 podman[242131]: 2025-12-09 10:53:53.002183608 +0000 UTC m=+0.134490414 container health_status d3a438131bb4ae6fd62d2e1493edbbbd51d1b8d6cbe1e9243f414a3aa421452b (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  9 10:53:57 compute-0 nova_compute[189493]: 2025-12-09 10:53:57.190 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 10:53:57 compute-0 nova_compute[189493]: 2025-12-09 10:53:57.753 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 10:53:59 compute-0 podman[203687]: time="2025-12-09T10:53:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  9 10:53:59 compute-0 podman[203687]: @ - - [09/Dec/2025:10:53:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Dec  9 10:53:59 compute-0 podman[203687]: @ - - [09/Dec/2025:10:53:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4778 "" "Go-http-client/1.1"
Dec  9 10:54:01 compute-0 openstack_network_exporter[205823]: ERROR   10:54:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  9 10:54:01 compute-0 openstack_network_exporter[205823]: ERROR   10:54:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  9 10:54:01 compute-0 openstack_network_exporter[205823]: ERROR   10:54:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  9 10:54:01 compute-0 openstack_network_exporter[205823]: ERROR   10:54:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  9 10:54:01 compute-0 openstack_network_exporter[205823]: ERROR   10:54:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  9 10:54:01 compute-0 podman[242155]: 2025-12-09 10:54:01.969862667 +0000 UTC m=+0.120786861 container health_status 0391d8911d61abd7376f1f93f329cadfe8d3add845c9e6f46fc2c3dfbcc4f02a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS)
Dec  9 10:54:02 compute-0 nova_compute[189493]: 2025-12-09 10:54:02.193 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 10:54:02 compute-0 nova_compute[189493]: 2025-12-09 10:54:02.756 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 10:54:04 compute-0 podman[242176]: 2025-12-09 10:54:04.91260148 +0000 UTC m=+0.073160814 container health_status 8508a94dacd5acdb5dbf860f4282331529be5c86ebd3e90b10e1dde8bc5013e9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec  9 10:54:07 compute-0 nova_compute[189493]: 2025-12-09 10:54:07.194 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 10:54:07 compute-0 nova_compute[189493]: 2025-12-09 10:54:07.760 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 10:54:09 compute-0 podman[242198]: 2025-12-09 10:54:09.951311536 +0000 UTC m=+0.099607426 container health_status ceb1c84a2b093143b9383b7e11364d7e851348d724743a0cd9ce4fd0c7070c92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm)
Dec  9 10:54:10 compute-0 podman[242218]: 2025-12-09 10:54:10.936031868 +0000 UTC m=+0.084855306 container health_status 8ad198c17f1da12dc50d5e17562d0139fb2a2f84db056ee9551dbf4f34c4cb9d (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, managed_by=edpm_ansible, vendor=Red Hat, Inc., release-0.7.12=, io.openshift.tags=base rhel9, config_id=edpm, io.buildah.version=1.29.0, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, release=1214.1726694543, build-date=2024-09-18T21:23:30, container_name=kepler, com.redhat.component=ubi9-container, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., architecture=x86_64, summary=Provides the latest release of Red Hat Universal Base Image 9., name=ubi9, version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, distribution-scope=public)
Dec  9 10:54:12 compute-0 nova_compute[189493]: 2025-12-09 10:54:12.199 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 10:54:12 compute-0 nova_compute[189493]: 2025-12-09 10:54:12.765 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 10:54:12 compute-0 podman[242238]: 2025-12-09 10:54:12.928518595 +0000 UTC m=+0.076774047 container health_status 8f562587c42532f877bd4ac5090cf2d81dd9415b6201e22f74972e6d6b9e9403 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec  9 10:54:12 compute-0 podman[242239]: 2025-12-09 10:54:12.98695318 +0000 UTC m=+0.126983160 container health_status b432835229990b9e7cd237d75f8273b15e565fca524d4ea9a7c1f1bf3c773614 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_id=edpm, io.buildah.version=1.41.4)
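The podman health_status events above are produced by each container's configured health check. As a rough sketch (not the edpm_ansible implementation; the helper name and the '/openstack' mount-target mapping are assumptions for illustration), the 'healthcheck' block of config_data could be rendered into podman CLI flags like this:

    import shlex

    def podman_health_args(config_data):
        # Hypothetical helper: turn the edpm-style 'healthcheck' mapping into
        # podman CLI flags. '--health-cmd' and '--volume' are real podman
        # flags; the mount target mirrors the 'mount' key in the log above.
        hc = config_data.get('healthcheck', {})
        args = []
        if 'test' in hc:
            args += ['--health-cmd', hc['test']]
        if 'mount' in hc:
            # e.g. /var/lib/openstack/healthchecks/kepler -> /openstack:ro,z
            args += ['--volume', f"{hc['mount']}:/openstack:ro,z"]
        return args

    cfg = {'healthcheck': {'test': '/openstack/healthcheck kepler',
                           'mount': '/var/lib/openstack/healthchecks/kepler'}}
    print(' '.join(shlex.quote(a) for a in podman_health_args(cfg)))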
Dec  9 10:54:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:54:16.984 106644 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  9 10:54:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:54:16.985 106644 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  9 10:54:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:54:16.987 106644 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
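The three lockutils lines above are the standard trace left by oslo.concurrency's synchronized decorator: acquiring, acquired after a wait, released after a hold. A minimal sketch of the pattern, assuming oslo.concurrency is installed (the empty body is a stand-in; the real neutron method checks and respawns child processes):

    from oslo_concurrency import lockutils

    class ProcessMonitor:
        # The decorator's inner wrapper emits the "Acquiring lock",
        # "acquired ... waited" and "released ... held" DEBUG lines seen above.
        @lockutils.synchronized('_check_child_processes')
        def _check_child_processes(self):
            pass  # stand-in for the real child-process check

    ProcessMonitor()._check_child_processes()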
Dec  9 10:54:17 compute-0 nova_compute[189493]: 2025-12-09 10:54:17.199 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 10:54:17 compute-0 nova_compute[189493]: 2025-12-09 10:54:17.774 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 10:54:19 compute-0 podman[242273]: 2025-12-09 10:54:19.020341174 +0000 UTC m=+0.159998770 container health_status 5da5cd4e36e0bba48fb617392bc8983ed1dbced7e4599ef74bb3327a2d50468d (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, architecture=x86_64, build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, name=ubi9-minimal, version=9.6, distribution-scope=public, managed_by=edpm_ansible, container_name=openstack_network_exporter, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1755695350, com.redhat.component=ubi9-minimal-container)
Dec  9 10:54:20 compute-0 podman[242295]: 2025-12-09 10:54:20.976607149 +0000 UTC m=+0.130428478 container health_status e0a077177b2f078df1f170a6e5c0e8e08d4365b999ec0c487047ed6ab628f3d6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0)
Dec  9 10:54:22 compute-0 nova_compute[189493]: 2025-12-09 10:54:22.204 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 10:54:22 compute-0 nova_compute[189493]: 2025-12-09 10:54:22.779 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
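The recurring '[POLLIN] on fd 26' wakeups are the OVSDB IDL's event loop noticing that its database connection fd has become readable. The sketch below shows only the underlying poll(2) primitive in plain Python, not the ovs.poller implementation itself:

    import os
    import select

    # Wait for a file descriptor to become readable (POLLIN) and wake up;
    # this is the primitive the __log_wakeup message reports on.
    r, w = os.pipe()
    p = select.poll()
    p.register(r, select.POLLIN)
    os.write(w, b'x')                  # make the read end readable
    for fd, events in p.poll(1000):    # returns [(fd, POLLIN)]
        if events & select.POLLIN:
            print(f"[POLLIN] on fd {fd}")  # mirrors the log message above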
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.290 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the polling process to take longer than expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.292 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
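The two manager lines above explain the cycle's pacing: with [1] worker thread and more pollsters than workers, tasks queue up and the polling pass stretches out. A minimal demonstration of that effect with concurrent.futures:

    import time
    from concurrent.futures import ThreadPoolExecutor

    def poll(name):
        time.sleep(0.1)  # stand-in for one pollster's work
        return name

    pollsters = [f'pollster-{i}' for i in range(5)]
    start = time.monotonic()
    with ThreadPoolExecutor(max_workers=1) as ex:   # [1] thread, as in the log
        list(ex.map(poll, pollsters))
    # 5 tasks on 1 worker run sequentially: ~0.5 s instead of ~0.1 s
    print(f"took {time.monotonic() - start:.2f}s")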
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.292 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b800>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.293 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f8a75e1b7d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.294 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e19820>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.294 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75eb8080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.294 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75eb8110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.294 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b1a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.295 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75eb81a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.295 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b2c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.295 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.295 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.295 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a78fa8380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.296 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a7702ebd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.296 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b3e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.296 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.296 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75eb8440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.296 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a78c21460>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.297 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b4a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.297 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1bce0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.297 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.298 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1bd10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.298 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.298 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1bd70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.298 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1bdd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.298 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1be30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.299 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1bf20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.299 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b7a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.299 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1bfb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
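Each 'Registering pollster [<stevedore.extension.Extension ...>]' line corresponds to one plugin loaded through stevedore entry points and handed to the shared ThreadPoolExecutor. A sketch of that loading step, assuming the entry-point group is ceilometer's compute pollster namespace (ceilometer.poll.compute); with ceilometer absent the manager is simply empty:

    from stevedore import extension

    # Load every entry point in the pollster namespace; each Extension wraps
    # a name (the meter, e.g. network.incoming.bytes) and a plugin class.
    mgr = extension.ExtensionManager(namespace='ceilometer.poll.compute')
    for ext in mgr:
        print(ext.name, ext.plugin)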
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.304 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '1bddf2bf-8932-4428-97d7-7342a7ec414b', 'name': 'vn-afn7y6w-x2vp5udxgoax-du67okrzyrz6-vnf-c7uowjdwt46l', 'flavor': {'id': 'cf91b364-8467-4d1e-8c92-f7d1fab99905', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '53d12211-5d5c-4333-b3ee-e3dcf1663767'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '736bbfddbeea47e3ac9d863ba120b8f2', 'user_id': 'e6d3a937c2a74eb0816d9f63820935e0', 'hostId': '17e7a15a42f56673ff2b1bfd38625d4824c4455b94d5713ec4c3a7ee', 'status': 'active', 'metadata': {'metering.server_group': '24f6e5b2-dd43-46f1-87a4-e2efc1300914'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.309 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f', 'name': 'test_0', 'flavor': {'id': 'cf91b364-8467-4d1e-8c92-f7d1fab99905', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '53d12211-5d5c-4333-b3ee-e3dcf1663767'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '736bbfddbeea47e3ac9d863ba120b8f2', 'user_id': 'e6d3a937c2a74eb0816d9f63820935e0', 'hostId': '17e7a15a42f56673ff2b1bfd38625d4824c4455b94d5713ec4c3a7ee', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
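The discovery step yields one dict per local libvirt instance, as in the two 'instance data' lines above. A pollster needs only a handful of those fields to label its samples; the helper below is hypothetical and shows just the shape:

    def sample_resource(instance):
        # Pick out the identifiers a sample is labelled with.
        return {
            'resource_id': instance['id'],
            'project_id': instance['tenant_id'],
            'user_id': instance['user_id'],
            'metadata': {
                'instance_name': instance['OS-EXT-SRV-ATTR:instance_name'],
                'host': instance['OS-EXT-SRV-ATTR:host'],
                'flavor': instance['flavor']['name'],
            },
        }

    inst = {'id': '41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f',
            'tenant_id': '736bbfddbeea47e3ac9d863ba120b8f2',
            'user_id': 'e6d3a937c2a74eb0816d9f63820935e0',
            'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001',
            'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com',
            'flavor': {'name': 'm1.small'}}
    print(sample_resource(inst))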
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.309 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.309 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1b800>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.310 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1b800>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.310 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.311 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-12-09T10:54:23.310067) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.315 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/network.incoming.bytes volume: 4891 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.320 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/network.incoming.bytes volume: 2010 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.320 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.321 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f8a7854a570>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.321 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.321 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e19820>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.321 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e19820>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.321 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.321 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-12-09T10:54:23.321333) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
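The heartbeat pairs above involve two processes inside the agent: the polling worker (pid 14 in these lines) reports the pollster name, and a status process (pid 12) records a last-seen timestamp for it. An illustrative (not ceilometer's) thread-safe version of that bookkeeping:

    import threading
    from datetime import datetime, timezone

    _status = {}
    _lock = threading.Lock()

    def heartbeat(pollster_name):
        # Record when this pollster last reported in, as an ISO timestamp
        # like the "(2025-12-09T10:54:23...)" values in the log.
        with _lock:
            _status[pollster_name] = datetime.now(timezone.utc).isoformat()

    heartbeat('disk.device.capacity')
    print(_status)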
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.363 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.364 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.364 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.407 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.408 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.409 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.410 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.410 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f8a75eb8050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.411 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.411 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75eb8080>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.411 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75eb8080>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.411 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.412 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/network.outgoing.packets volume: 44 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.413 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-12-09T10:54:23.411897) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.413 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/network.outgoing.packets volume: 23 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.414 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.414 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f8a75eb80e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.414 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.415 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75eb8110>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.415 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75eb8110>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.415 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.416 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.416 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.417 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-12-09T10:54:23.415539) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.417 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.418 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f8a75e1b260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.418 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.418 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1b1a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.418 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1b1a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.419 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.419 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-12-09T10:54:23.418951) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.542 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.read.bytes volume: 23325184 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.543 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.544 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.670 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.671 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.672 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.673 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.674 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f8a75eb8170>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.674 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.674 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75eb81a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.675 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75eb81a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.675 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.675 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.676 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.677 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.678 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f8a75e1b290>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.678 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-12-09T10:54:23.675234) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.678 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.678 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1b2c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.679 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1b2c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.679 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.679 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.read.latency volume: 439593872 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.680 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.read.latency volume: 92612690 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.681 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-12-09T10:54:23.679439) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.681 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.read.latency volume: 59905939 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.682 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.read.latency volume: 469600468 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.682 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.read.latency volume: 78501609 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.682 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.read.latency volume: 60811824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.683 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.684 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f8a75e1b2f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.684 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.684 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1b320>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.684 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1b320>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.684 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.685 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.read.requests volume: 844 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.685 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.686 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.686 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.687 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.687 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.688 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.688 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f8a75e1b350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.689 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-12-09T10:54:23.684704) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.689 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.689 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1b380>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.689 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1b380>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.689 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.690 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.usage volume: 21364736 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.690 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.691 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.691 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.692 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.692 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-12-09T10:54:23.689751) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.692 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.693 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.694 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f8a7710f530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.694 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.694 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a78fa8380>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.694 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a78fa8380>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.695 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.695 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-12-09T10:54:23.694993) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.735 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/cpu volume: 188690000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.775 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/cpu volume: 38210000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.776 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.776 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f8a78ed1430>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.776 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.776 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a7702ebd0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.777 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a7702ebd0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.777 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.777 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.allocation volume: 21635072 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.777 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-12-09T10:54:23.777180) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.778 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.778 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.779 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.allocation volume: 21307392 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.779 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.780 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.781 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.781 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f8a75e1b3b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.781 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.781 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1b3e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.781 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1b3e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.782 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.782 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.write.bytes volume: 41836544 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.782 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.783 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.783 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-12-09T10:54:23.781960) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.784 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.784 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.785 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.786 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.786 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f8a75e1b410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.786 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.787 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1b440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.787 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1b440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.787 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.787 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-12-09T10:54:23.787455) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.788 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.write.latency volume: 2118298266 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.788 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.write.latency volume: 13222286 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.789 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.789 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.write.latency volume: 1299788707 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.790 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.write.latency volume: 9241063 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.790 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.791 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.791 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f8a75eb8410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.792 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.792 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75eb8440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.792 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75eb8440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.792 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.792 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.793 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.794 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.794 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f8a75e1be90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.795 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.795 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-12-09T10:54:23.792580) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.795 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a78c21460>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.795 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a78c21460>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.795 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.796 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/network.outgoing.bytes volume: 4934 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.796 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-12-09T10:54:23.795884) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.796 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/network.outgoing.bytes volume: 2314 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.797 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.797 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f8a75e1b470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.798 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.798 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1b4a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.798 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1b4a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.798 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.799 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.write.requests volume: 236 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.799 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.800 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.800 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.write.requests volume: 234 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.801 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.801 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.802 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.802 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f8a75e1b830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.803 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-12-09T10:54:23.798682) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.803 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.804 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1bce0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.804 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1bce0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.804 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.804 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.805 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-12-09T10:54:23.804490) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.805 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.806 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.806 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f8a75e1b4d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.806 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.806 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1b500>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.806 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1b500>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.807 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.808 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.808 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f8a75e1bad0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.808 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.809 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f8a75e1b530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.809 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.809 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1b560>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.809 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1b560>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.809 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.810 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.811 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f8a75e1bd40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.811 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.811 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1bd70>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.811 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1bd70>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.811 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.812 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/network.incoming.packets volume: 32 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.812 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/network.incoming.packets volume: 18 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.812 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.813 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f8a75e1bda0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.813 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.813 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1bdd0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.813 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1bdd0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.813 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.813 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.813 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.814 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.814 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f8a75e1be00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.814 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.814 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1be30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.814 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1be30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.815 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.815 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.815 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.815 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.815 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f8a75e1bef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.816 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.816 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1bf20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.816 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1bf20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.816 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.816 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.816 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.817 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.817 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f8a75e1b770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.817 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.817 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1b7a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.817 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1b7a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.817 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.817 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/memory.usage volume: 49.12890625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.818 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/memory.usage volume: 48.94921875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.818 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.818 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f8a75e1bf80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.818 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.819 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.819 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.819 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.819 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.820 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.820 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.820 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.820 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.820 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.822 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-12-09T10:54:23.807080) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.822 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-12-09T10:54:23.809883) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.822 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-12-09T10:54:23.811689) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.822 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-12-09T10:54:23.813412) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.823 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-12-09T10:54:23.815032) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.823 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-12-09T10:54:23.816342) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.823 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-12-09T10:54:23.817644) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.821 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.823 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.824 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.824 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.824 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.824 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.824 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.825 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.825 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.825 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.825 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.825 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.826 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.826 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.826 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.826 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 10:54:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:54:23.826 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 10:54:23 compute-0 podman[242322]: 2025-12-09 10:54:23.927029189 +0000 UTC m=+0.078910892 container health_status d3a438131bb4ae6fd62d2e1493edbbbd51d1b8d6cbe1e9243f414a3aa421452b (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec  9 10:54:25 compute-0 nova_compute[189493]: 2025-12-09 10:54:25.841 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  9 10:54:26 compute-0 nova_compute[189493]: 2025-12-09 10:54:26.836 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  9 10:54:27 compute-0 nova_compute[189493]: 2025-12-09 10:54:27.209 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 10:54:27 compute-0 nova_compute[189493]: 2025-12-09 10:54:27.783 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 10:54:28 compute-0 nova_compute[189493]: 2025-12-09 10:54:28.841 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
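The "Running periodic task" lines come from oslo.service: ComputeManager registers methods with the periodic_task decorator, and the service loop invokes run_periodic_tasks on schedule. A minimal sketch of that registration pattern (the Manager class and spacing value are invented for illustration; nova's real tasks live in nova/compute/manager.py):

    from oslo_config import cfg
    from oslo_service import periodic_task

    class Manager(periodic_task.PeriodicTasks):
        # run_immediately=True makes the task eligible on the first pass
        @periodic_task.periodic_task(spacing=60, run_immediately=True)
        def _poll_volume_usage(self, context):
            print("polling volume usage")

    mgr = Manager(cfg.CONF)
    mgr.run_periodic_tasks(context=None)  # the service loop calls this repeatedly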
Dec  9 10:54:29 compute-0 podman[203687]: time="2025-12-09T10:54:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  9 10:54:29 compute-0 podman[203687]: @ - - [09/Dec/2025:10:54:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Dec  9 10:54:29 compute-0 podman[203687]: @ - - [09/Dec/2025:10:54:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4781 "" "Go-http-client/1.1"
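These GETs are podman's API service answering a client over its unix socket, most likely the podman_exporter (its config further down sets CONTAINER_HOST=unix:///run/podman/podman.sock). The same endpoint can be queried directly; a sketch using only the standard library (requires access to the socket, typically root):

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """http.client over an AF_UNIX socket, which is where podman listens."""
        def __init__(self, socket_path):
            super().__init__("localhost")
            self.socket_path = socket_path
        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self.socket_path)

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    containers = json.loads(conn.getresponse().read())
    print([c["Names"] for c in containers])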
Dec  9 10:54:29 compute-0 nova_compute[189493]: 2025-12-09 10:54:29.842 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  9 10:54:30 compute-0 nova_compute[189493]: 2025-12-09 10:54:30.842 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  9 10:54:30 compute-0 nova_compute[189493]: 2025-12-09 10:54:30.843 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec  9 10:54:30 compute-0 nova_compute[189493]: 2025-12-09 10:54:30.843 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec  9 10:54:31 compute-0 openstack_network_exporter[205823]: ERROR   10:54:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  9 10:54:31 compute-0 openstack_network_exporter[205823]: ERROR   10:54:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  9 10:54:31 compute-0 openstack_network_exporter[205823]: ERROR   10:54:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  9 10:54:31 compute-0 openstack_network_exporter[205823]: ERROR   10:54:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  9 10:54:31 compute-0 openstack_network_exporter[205823]: ERROR   10:54:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
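The exporter locates each daemon through its control socket, and on a compute node ovn-northd simply is not running (it belongs to the control plane), so the PID lookup failures above are expected noise rather than a fault. The checks can be reproduced with a glob over the socket directories; the patterns below follow the exporter's volume mounts shown later in this log and the usual <daemon>.<pid>.ctl naming, so treat them as assumptions:

    import glob

    # Host-side paths; inside the exporter container they are mounted at
    # /run/openvswitch and /run/ovn respectively.
    for pattern in ("/var/run/openvswitch/ovsdb-server.*.ctl",
                    "/var/lib/openvswitch/ovn/ovn-northd.*.ctl"):
        hits = glob.glob(pattern)
        print(pattern, "->", hits or "no control socket files found")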
Dec  9 10:54:32 compute-0 nova_compute[189493]: 2025-12-09 10:54:32.007 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Acquiring lock "refresh_cache-41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec  9 10:54:32 compute-0 nova_compute[189493]: 2025-12-09 10:54:32.008 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Acquired lock "refresh_cache-41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec  9 10:54:32 compute-0 nova_compute[189493]: 2025-12-09 10:54:32.009 189497 DEBUG nova.network.neutron [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] [instance: 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec  9 10:54:32 compute-0 nova_compute[189493]: 2025-12-09 10:54:32.010 189497 DEBUG nova.objects.instance [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec  9 10:54:32 compute-0 nova_compute[189493]: 2025-12-09 10:54:32.210 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 10:54:32 compute-0 nova_compute[189493]: 2025-12-09 10:54:32.788 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 10:54:33 compute-0 podman[242346]: 2025-12-09 10:54:33.007836062 +0000 UTC m=+0.156169882 container health_status 0391d8911d61abd7376f1f93f329cadfe8d3add845c9e6f46fc2c3dfbcc4f02a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=multipathd, io.buildah.version=1.41.3)
Dec  9 10:54:34 compute-0 nova_compute[189493]: 2025-12-09 10:54:34.722 189497 DEBUG nova.network.neutron [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] [instance: 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f] Updating instance_info_cache with network_info: [{"id": "2c684388-b6d9-4de0-8691-29807fabed2c", "address": "fa:16:3e:c7:65:39", "network": {"id": "c5af7354-5afe-400a-9e13-5500648117d8", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.250", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.226", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "736bbfddbeea47e3ac9d863ba120b8f2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2c684388-b6", "ovs_interfaceid": "2c684388-b6d9-4de0-8691-29807fabed2c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec  9 10:54:34 compute-0 nova_compute[189493]: 2025-12-09 10:54:34.741 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Releasing lock "refresh_cache-41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec  9 10:54:34 compute-0 nova_compute[189493]: 2025-12-09 10:54:34.742 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] [instance: 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
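The network_info payload logged two lines up is plain JSON: a list of VIFs, each carrying its MAC, subnets, fixed IPs, and any attached floating IPs. A sketch of pulling the addresses out (the structure below is the logged entry, trimmed to the relevant keys):

    import json

    network_info = json.loads("""[{"id": "2c684388-b6d9-4de0-8691-29807fabed2c",
      "address": "fa:16:3e:c7:65:39",
      "network": {"subnets": [{"ips": [{"address": "192.168.0.250",
        "floating_ips": [{"address": "192.168.122.226"}]}]}]}}]""")

    for vif in network_info:
        for subnet in vif["network"]["subnets"]:
            for ip in subnet["ips"]:
                fips = [f["address"] for f in ip.get("floating_ips", [])]
                print(vif["address"], ip["address"], "floating:", fips)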
Dec  9 10:54:34 compute-0 nova_compute[189493]: 2025-12-09 10:54:34.743 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  9 10:54:34 compute-0 nova_compute[189493]: 2025-12-09 10:54:34.744 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  9 10:54:34 compute-0 nova_compute[189493]: 2025-12-09 10:54:34.745 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  9 10:54:34 compute-0 nova_compute[189493]: 2025-12-09 10:54:34.786 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  9 10:54:34 compute-0 nova_compute[189493]: 2025-12-09 10:54:34.786 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  9 10:54:34 compute-0 nova_compute[189493]: 2025-12-09 10:54:34.787 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
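The acquire/release pairs around "compute_resources" are oslo.concurrency at work: the resource tracker serializes every update behind one named lock, which is why the audit below can run unchallenged. The general pattern, as a sketch (nova wraps this in its own helper, so this shows the library idiom rather than nova's exact code):

    from oslo_concurrency import lockutils

    @lockutils.synchronized("compute_resources")
    def clean_compute_node_cache():
        # body executes only while the named lock is held; concurrent callers
        # block, producing the waited/held timings seen in the log
        pass

    clean_compute_node_cache()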
Dec  9 10:54:34 compute-0 nova_compute[189493]: 2025-12-09 10:54:34.788 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec  9 10:54:34 compute-0 nova_compute[189493]: 2025-12-09 10:54:34.881 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1bddf2bf-8932-4428-97d7-7342a7ec414b/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  9 10:54:34 compute-0 nova_compute[189493]: 2025-12-09 10:54:34.985 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1bddf2bf-8932-4428-97d7-7342a7ec414b/disk --force-share --output=json" returned: 0 in 0.103s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  9 10:54:34 compute-0 nova_compute[189493]: 2025-12-09 10:54:34.987 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1bddf2bf-8932-4428-97d7-7342a7ec414b/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  9 10:54:35 compute-0 nova_compute[189493]: 2025-12-09 10:54:35.086 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1bddf2bf-8932-4428-97d7-7342a7ec414b/disk --force-share --output=json" returned: 0 in 0.100s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  9 10:54:35 compute-0 nova_compute[189493]: 2025-12-09 10:54:35.088 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  9 10:54:35 compute-0 nova_compute[189493]: 2025-12-09 10:54:35.157 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.eph0 --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  9 10:54:35 compute-0 nova_compute[189493]: 2025-12-09 10:54:35.158 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  9 10:54:35 compute-0 nova_compute[189493]: 2025-12-09 10:54:35.257 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.eph0 --force-share --output=json" returned: 0 in 0.099s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  9 10:54:35 compute-0 nova_compute[189493]: 2025-12-09 10:54:35.266 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  9 10:54:35 compute-0 nova_compute[189493]: 2025-12-09 10:54:35.335 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  9 10:54:35 compute-0 nova_compute[189493]: 2025-12-09 10:54:35.337 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  9 10:54:35 compute-0 nova_compute[189493]: 2025-12-09 10:54:35.430 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk --force-share --output=json" returned: 0 in 0.093s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  9 10:54:35 compute-0 nova_compute[189493]: 2025-12-09 10:54:35.431 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  9 10:54:35 compute-0 nova_compute[189493]: 2025-12-09 10:54:35.492 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.eph0 --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  9 10:54:35 compute-0 nova_compute[189493]: 2025-12-09 10:54:35.494 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  9 10:54:35 compute-0 nova_compute[189493]: 2025-12-09 10:54:35.563 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.eph0 --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
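Each image probe above is qemu-img info wrapped in oslo_concurrency.prlimit, capping address space at 1 GiB (--as=1073741824) and CPU time at 30 s so a malformed disk image cannot hang or exhaust the host; --force-share avoids taking the write lock held by the running guest. Roughly the same call via the library, as a sketch (the instance path is a placeholder):

    from oslo_concurrency import processutils

    out, _err = processutils.execute(
        "qemu-img", "info", "/var/lib/nova/instances/<instance-uuid>/disk",
        "--force-share", "--output=json",
        env_variables={"LC_ALL": "C", "LANG": "C"},
        prlimit=processutils.ProcessLimits(address_space=1073741824, cpu_time=30))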
Dec  9 10:54:35 compute-0 podman[242390]: 2025-12-09 10:54:35.920223514 +0000 UTC m=+0.076136493 container health_status 8508a94dacd5acdb5dbf860f4282331529be5c86ebd3e90b10e1dde8bc5013e9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  9 10:54:35 compute-0 nova_compute[189493]: 2025-12-09 10:54:35.984 189497 WARNING nova.virt.libvirt.driver [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec  9 10:54:35 compute-0 nova_compute[189493]: 2025-12-09 10:54:35.986 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5046MB free_disk=72.16311645507812GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec  9 10:54:35 compute-0 nova_compute[189493]: 2025-12-09 10:54:35.986 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  9 10:54:35 compute-0 nova_compute[189493]: 2025-12-09 10:54:35.986 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  9 10:54:36 compute-0 nova_compute[189493]: 2025-12-09 10:54:36.366 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Instance 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec  9 10:54:36 compute-0 nova_compute[189493]: 2025-12-09 10:54:36.367 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Instance 1bddf2bf-8932-4428-97d7-7342a7ec414b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Dec  9 10:54:36 compute-0 nova_compute[189493]: 2025-12-09 10:54:36.367 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec  9 10:54:36 compute-0 nova_compute[189493]: 2025-12-09 10:54:36.367 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec  9 10:54:36 compute-0 nova_compute[189493]: 2025-12-09 10:54:36.417 189497 DEBUG nova.scheduler.client.report [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Refreshing inventories for resource provider cdc1168d-33c9-4d2c-8f23-1b695a68afd0 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Dec  9 10:54:36 compute-0 nova_compute[189493]: 2025-12-09 10:54:36.485 189497 DEBUG nova.scheduler.client.report [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Updating ProviderTree inventory for provider cdc1168d-33c9-4d2c-8f23-1b695a68afd0 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Dec  9 10:54:36 compute-0 nova_compute[189493]: 2025-12-09 10:54:36.485 189497 DEBUG nova.compute.provider_tree [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Updating inventory in ProviderTree for provider cdc1168d-33c9-4d2c-8f23-1b695a68afd0 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec  9 10:54:36 compute-0 nova_compute[189493]: 2025-12-09 10:54:36.507 189497 DEBUG nova.scheduler.client.report [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Refreshing aggregate associations for resource provider cdc1168d-33c9-4d2c-8f23-1b695a68afd0, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Dec  9 10:54:36 compute-0 nova_compute[189493]: 2025-12-09 10:54:36.531 189497 DEBUG nova.scheduler.client.report [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Refreshing trait associations for resource provider cdc1168d-33c9-4d2c-8f23-1b695a68afd0, traits: COMPUTE_STORAGE_BUS_SATA,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_SSE,HW_CPU_X86_AMD_SVM,HW_CPU_X86_SSE4A,COMPUTE_STORAGE_BUS_FDC,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_SSE42,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_BMI,HW_CPU_X86_BMI2,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SHA,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_AESNI,HW_CPU_X86_CLMUL,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_ABM,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_SSSE3,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_SVM,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_DEVICE_TAGGING,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_F16C,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_AVX2,COMPUTE_NODE,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_SSE2,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_FMA3,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_ACCELERATORS,HW_CPU_X86_MMX,COMPUTE_SECURITY_TPM_2_0,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE41,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_GRAPHICS_MODEL_BOCHS _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Dec  9 10:54:36 compute-0 nova_compute[189493]: 2025-12-09 10:54:36.594 189497 DEBUG nova.compute.provider_tree [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Inventory has not changed in ProviderTree for provider: cdc1168d-33c9-4d2c-8f23-1b695a68afd0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec  9 10:54:36 compute-0 nova_compute[189493]: 2025-12-09 10:54:36.615 189497 DEBUG nova.scheduler.client.report [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Inventory has not changed for provider cdc1168d-33c9-4d2c-8f23-1b695a68afd0 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
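The inventory dict above is what placement schedules against: for each resource class the usable capacity works out to (total - reserved) * allocation_ratio. A quick check of the numbers in this log:

    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 79,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        cap = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(f"{rc}: {cap:g} schedulable")
    # -> VCPU: 32, MEMORY_MB: 7167, DISK_GB: 70.2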
Dec  9 10:54:36 compute-0 nova_compute[189493]: 2025-12-09 10:54:36.618 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec  9 10:54:36 compute-0 nova_compute[189493]: 2025-12-09 10:54:36.619 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.632s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  9 10:54:36 compute-0 nova_compute[189493]: 2025-12-09 10:54:36.620 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  9 10:54:36 compute-0 nova_compute[189493]: 2025-12-09 10:54:36.620 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Dec  9 10:54:36 compute-0 nova_compute[189493]: 2025-12-09 10:54:36.635 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Dec  9 10:54:36 compute-0 nova_compute[189493]: 2025-12-09 10:54:36.635 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  9 10:54:36 compute-0 nova_compute[189493]: 2025-12-09 10:54:36.636 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Dec  9 10:54:36 compute-0 nova_compute[189493]: 2025-12-09 10:54:36.651 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  9 10:54:37 compute-0 nova_compute[189493]: 2025-12-09 10:54:37.212 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 10:54:37 compute-0 nova_compute[189493]: 2025-12-09 10:54:37.757 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  9 10:54:37 compute-0 nova_compute[189493]: 2025-12-09 10:54:37.758 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec  9 10:54:37 compute-0 nova_compute[189493]: 2025-12-09 10:54:37.792 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 10:54:40 compute-0 podman[242411]: 2025-12-09 10:54:40.972439053 +0000 UTC m=+0.125525276 container health_status ceb1c84a2b093143b9383b7e11364d7e851348d724743a0cd9ce4fd0c7070c92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Dec  9 10:54:41 compute-0 podman[242431]: 2025-12-09 10:54:41.122933486 +0000 UTC m=+0.105778203 container health_status 8ad198c17f1da12dc50d5e17562d0139fb2a2f84db056ee9551dbf4f34c4cb9d (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, io.openshift.expose-services=, version=9.4, release=1214.1726694543, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, vendor=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, io.k8s.display-name=Red Hat Universal Base Image 9, container_name=kepler, maintainer=Red Hat, Inc., vcs-type=git, io.openshift.tags=base rhel9, architecture=x86_64, summary=Provides the latest release of Red Hat Universal Base Image 9., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543)
Dec  9 10:54:42 compute-0 nova_compute[189493]: 2025-12-09 10:54:42.215 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 10:54:42 compute-0 nova_compute[189493]: 2025-12-09 10:54:42.794 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 10:54:43 compute-0 podman[242452]: 2025-12-09 10:54:43.983584501 +0000 UTC m=+0.124186557 container health_status b432835229990b9e7cd237d75f8273b15e565fca524d4ea9a7c1f1bf3c773614 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  9 10:54:44 compute-0 podman[242451]: 2025-12-09 10:54:44.003404396 +0000 UTC m=+0.145191866 container health_status 8f562587c42532f877bd4ac5090cf2d81dd9415b6201e22f74972e6d6b9e9403 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  9 10:54:47 compute-0 nova_compute[189493]: 2025-12-09 10:54:47.221 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 10:54:47 compute-0 nova_compute[189493]: 2025-12-09 10:54:47.797 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 10:54:49 compute-0 podman[242489]: 2025-12-09 10:54:49.952308579 +0000 UTC m=+0.108953132 container health_status 5da5cd4e36e0bba48fb617392bc8983ed1dbced7e4599ef74bb3327a2d50468d (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., vcs-type=git, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, name=ubi9-minimal, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, io.openshift.tags=minimal rhel9, config_id=edpm, distribution-scope=public, maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, version=9.6)
Dec  9 10:54:52 compute-0 podman[242510]: 2025-12-09 10:54:52.048089601 +0000 UTC m=+0.182566313 container health_status e0a077177b2f078df1f170a6e5c0e8e08d4365b999ec0c487047ed6ab628f3d6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec  9 10:54:52 compute-0 nova_compute[189493]: 2025-12-09 10:54:52.221 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 10:54:52 compute-0 nova_compute[189493]: 2025-12-09 10:54:52.800 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 10:54:54 compute-0 podman[242535]: 2025-12-09 10:54:54.942492041 +0000 UTC m=+0.097901282 container health_status d3a438131bb4ae6fd62d2e1493edbbbd51d1b8d6cbe1e9243f414a3aa421452b (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  9 10:54:57 compute-0 nova_compute[189493]: 2025-12-09 10:54:57.224 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 10:54:57 compute-0 nova_compute[189493]: 2025-12-09 10:54:57.804 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 10:54:59 compute-0 podman[203687]: time="2025-12-09T10:54:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  9 10:54:59 compute-0 podman[203687]: @ - - [09/Dec/2025:10:54:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Dec  9 10:54:59 compute-0 podman[203687]: @ - - [09/Dec/2025:10:54:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4775 "" "Go-http-client/1.1"
Dec  9 10:55:01 compute-0 openstack_network_exporter[205823]: ERROR   10:55:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  9 10:55:01 compute-0 openstack_network_exporter[205823]: ERROR   10:55:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  9 10:55:01 compute-0 openstack_network_exporter[205823]: ERROR   10:55:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  9 10:55:01 compute-0 openstack_network_exporter[205823]: ERROR   10:55:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  9 10:55:01 compute-0 openstack_network_exporter[205823]: ERROR   10:55:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  9 10:55:02 compute-0 nova_compute[189493]: 2025-12-09 10:55:02.225 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 10:55:02 compute-0 nova_compute[189493]: 2025-12-09 10:55:02.809 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 10:55:03 compute-0 podman[242561]: 2025-12-09 10:55:03.960119473 +0000 UTC m=+0.116204634 container health_status 0391d8911d61abd7376f1f93f329cadfe8d3add845c9e6f46fc2c3dfbcc4f02a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=multipathd, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  9 10:55:06 compute-0 podman[242581]: 2025-12-09 10:55:06.963660829 +0000 UTC m=+0.114251070 container health_status 8508a94dacd5acdb5dbf860f4282331529be5c86ebd3e90b10e1dde8bc5013e9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec  9 10:55:07 compute-0 nova_compute[189493]: 2025-12-09 10:55:07.228 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 10:55:07 compute-0 nova_compute[189493]: 2025-12-09 10:55:07.813 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 10:55:11 compute-0 podman[242606]: 2025-12-09 10:55:11.972322128 +0000 UTC m=+0.108197160 container health_status ceb1c84a2b093143b9383b7e11364d7e851348d724743a0cd9ce4fd0c7070c92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=edpm, container_name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec  9 10:55:11 compute-0 podman[242605]: 2025-12-09 10:55:11.989207621 +0000 UTC m=+0.125968887 container health_status 8ad198c17f1da12dc50d5e17562d0139fb2a2f84db056ee9551dbf4f34c4cb9d (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.component=ubi9-container, managed_by=edpm_ansible, architecture=x86_64, container_name=kepler, io.openshift.expose-services=, vcs-type=git, build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, distribution-scope=public, version=9.4, config_id=edpm, release=1214.1726694543)
Dec  9 10:55:12 compute-0 nova_compute[189493]: 2025-12-09 10:55:12.233 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 10:55:12 compute-0 nova_compute[189493]: 2025-12-09 10:55:12.816 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 10:55:14 compute-0 podman[242641]: 2025-12-09 10:55:14.934458904 +0000 UTC m=+0.081801470 container health_status 8f562587c42532f877bd4ac5090cf2d81dd9415b6201e22f74972e6d6b9e9403 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Dec  9 10:55:14 compute-0 podman[242642]: 2025-12-09 10:55:14.939896167 +0000 UTC m=+0.081298017 container health_status b432835229990b9e7cd237d75f8273b15e565fca524d4ea9a7c1f1bf3c773614 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_id=edpm, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec  9 10:55:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:55:16.986 106644 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  9 10:55:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:55:16.987 106644 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  9 10:55:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:55:16.987 106644 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  9 10:55:17 compute-0 nova_compute[189493]: 2025-12-09 10:55:17.233 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 10:55:17 compute-0 nova_compute[189493]: 2025-12-09 10:55:17.819 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 10:55:20 compute-0 podman[242676]: 2025-12-09 10:55:20.93713281 +0000 UTC m=+0.083082507 container health_status 5da5cd4e36e0bba48fb617392bc8983ed1dbced7e4599ef74bb3327a2d50468d (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, managed_by=edpm_ansible, name=ubi9-minimal, release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, distribution-scope=public, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, build-date=2025-08-20T13:12:41, vendor=Red Hat, Inc., io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, maintainer=Red Hat, Inc.)
Dec  9 10:55:22 compute-0 nova_compute[189493]: 2025-12-09 10:55:22.236 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 10:55:22 compute-0 nova_compute[189493]: 2025-12-09 10:55:22.821 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 10:55:22 compute-0 podman[242696]: 2025-12-09 10:55:22.993012436 +0000 UTC m=+0.144328552 container health_status e0a077177b2f078df1f170a6e5c0e8e08d4365b999ec0c487047ed6ab628f3d6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
Dec  9 10:55:25 compute-0 podman[242722]: 2025-12-09 10:55:25.929688459 +0000 UTC m=+0.077731908 container health_status d3a438131bb4ae6fd62d2e1493edbbbd51d1b8d6cbe1e9243f414a3aa421452b (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec  9 10:55:26 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:55:26.967 106644 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=5, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '56:ee:a7', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '3e:d4:ad:27:cb:0f'}, ipsec=False) old=SB_Global(nb_cfg=4) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec  9 10:55:26 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:55:26.969 106644 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec  9 10:55:26 compute-0 nova_compute[189493]: 2025-12-09 10:55:26.973 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 10:55:27 compute-0 nova_compute[189493]: 2025-12-09 10:55:27.239 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 10:55:27 compute-0 nova_compute[189493]: 2025-12-09 10:55:27.824 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 10:55:27 compute-0 nova_compute[189493]: 2025-12-09 10:55:27.836 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  9 10:55:27 compute-0 nova_compute[189493]: 2025-12-09 10:55:27.841 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  9 10:55:29 compute-0 podman[203687]: time="2025-12-09T10:55:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  9 10:55:29 compute-0 podman[203687]: @ - - [09/Dec/2025:10:55:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Dec  9 10:55:29 compute-0 podman[203687]: @ - - [09/Dec/2025:10:55:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4783 "" "Go-http-client/1.1"
Dec  9 10:55:29 compute-0 nova_compute[189493]: 2025-12-09 10:55:29.841 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  9 10:55:29 compute-0 nova_compute[189493]: 2025-12-09 10:55:29.842 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  9 10:55:30 compute-0 nova_compute[189493]: 2025-12-09 10:55:30.838 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  9 10:55:30 compute-0 nova_compute[189493]: 2025-12-09 10:55:30.893 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  9 10:55:30 compute-0 nova_compute[189493]: 2025-12-09 10:55:30.895 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec  9 10:55:31 compute-0 openstack_network_exporter[205823]: ERROR   10:55:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  9 10:55:31 compute-0 openstack_network_exporter[205823]: ERROR   10:55:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  9 10:55:31 compute-0 openstack_network_exporter[205823]: ERROR   10:55:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  9 10:55:31 compute-0 openstack_network_exporter[205823]: ERROR   10:55:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  9 10:55:31 compute-0 openstack_network_exporter[205823]: ERROR   10:55:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  9 10:55:31 compute-0 nova_compute[189493]: 2025-12-09 10:55:31.550 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Acquiring lock "refresh_cache-1bddf2bf-8932-4428-97d7-7342a7ec414b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  9 10:55:31 compute-0 nova_compute[189493]: 2025-12-09 10:55:31.551 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Acquired lock "refresh_cache-1bddf2bf-8932-4428-97d7-7342a7ec414b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  9 10:55:31 compute-0 nova_compute[189493]: 2025-12-09 10:55:31.552 189497 DEBUG nova.network.neutron [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] [instance: 1bddf2bf-8932-4428-97d7-7342a7ec414b] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  9 10:55:32 compute-0 nova_compute[189493]: 2025-12-09 10:55:32.241 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 10:55:32 compute-0 nova_compute[189493]: 2025-12-09 10:55:32.829 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 10:55:33 compute-0 nova_compute[189493]: 2025-12-09 10:55:33.167 189497 DEBUG oslo_concurrency.lockutils [None req-7a1e6ff3-fa41-4e5f-a75e-0ef70d2ddd09 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Acquiring lock "32dd7fb0-7003-48cc-b688-4b94946c911f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  9 10:55:33 compute-0 nova_compute[189493]: 2025-12-09 10:55:33.170 189497 DEBUG oslo_concurrency.lockutils [None req-7a1e6ff3-fa41-4e5f-a75e-0ef70d2ddd09 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lock "32dd7fb0-7003-48cc-b688-4b94946c911f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  9 10:55:33 compute-0 nova_compute[189493]: 2025-12-09 10:55:33.173 189497 DEBUG nova.network.neutron [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] [instance: 1bddf2bf-8932-4428-97d7-7342a7ec414b] Updating instance_info_cache with network_info: [{"id": "7819acf8-daa2-4391-96d4-ef33c260f794", "address": "fa:16:3e:01:4e:b4", "network": {"id": "c5af7354-5afe-400a-9e13-5500648117d8", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.212", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.172", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "736bbfddbeea47e3ac9d863ba120b8f2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7819acf8-da", "ovs_interfaceid": "7819acf8-daa2-4391-96d4-ef33c260f794", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  9 10:55:33 compute-0 nova_compute[189493]: 2025-12-09 10:55:33.202 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Releasing lock "refresh_cache-1bddf2bf-8932-4428-97d7-7342a7ec414b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  9 10:55:33 compute-0 nova_compute[189493]: 2025-12-09 10:55:33.203 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] [instance: 1bddf2bf-8932-4428-97d7-7342a7ec414b] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  9 10:55:33 compute-0 nova_compute[189493]: 2025-12-09 10:55:33.204 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  9 10:55:33 compute-0 nova_compute[189493]: 2025-12-09 10:55:33.205 189497 DEBUG nova.compute.manager [None req-7a1e6ff3-fa41-4e5f-a75e-0ef70d2ddd09 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 32dd7fb0-7003-48cc-b688-4b94946c911f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec  9 10:55:33 compute-0 nova_compute[189493]: 2025-12-09 10:55:33.210 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  9 10:55:33 compute-0 nova_compute[189493]: 2025-12-09 10:55:33.307 189497 DEBUG oslo_concurrency.lockutils [None req-7a1e6ff3-fa41-4e5f-a75e-0ef70d2ddd09 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  9 10:55:33 compute-0 nova_compute[189493]: 2025-12-09 10:55:33.309 189497 DEBUG oslo_concurrency.lockutils [None req-7a1e6ff3-fa41-4e5f-a75e-0ef70d2ddd09 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  9 10:55:33 compute-0 nova_compute[189493]: 2025-12-09 10:55:33.324 189497 DEBUG nova.virt.hardware [None req-7a1e6ff3-fa41-4e5f-a75e-0ef70d2ddd09 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec  9 10:55:33 compute-0 nova_compute[189493]: 2025-12-09 10:55:33.325 189497 INFO nova.compute.claims [None req-7a1e6ff3-fa41-4e5f-a75e-0ef70d2ddd09 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 32dd7fb0-7003-48cc-b688-4b94946c911f] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec  9 10:55:33 compute-0 nova_compute[189493]: 2025-12-09 10:55:33.514 189497 DEBUG nova.compute.provider_tree [None req-7a1e6ff3-fa41-4e5f-a75e-0ef70d2ddd09 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Inventory has not changed in ProviderTree for provider: cdc1168d-33c9-4d2c-8f23-1b695a68afd0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  9 10:55:33 compute-0 nova_compute[189493]: 2025-12-09 10:55:33.538 189497 DEBUG nova.scheduler.client.report [None req-7a1e6ff3-fa41-4e5f-a75e-0ef70d2ddd09 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Inventory has not changed for provider cdc1168d-33c9-4d2c-8f23-1b695a68afd0 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  9 10:55:33 compute-0 nova_compute[189493]: 2025-12-09 10:55:33.569 189497 DEBUG oslo_concurrency.lockutils [None req-7a1e6ff3-fa41-4e5f-a75e-0ef70d2ddd09 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.260s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  9 10:55:33 compute-0 nova_compute[189493]: 2025-12-09 10:55:33.570 189497 DEBUG nova.compute.manager [None req-7a1e6ff3-fa41-4e5f-a75e-0ef70d2ddd09 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 32dd7fb0-7003-48cc-b688-4b94946c911f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec  9 10:55:33 compute-0 nova_compute[189493]: 2025-12-09 10:55:33.626 189497 DEBUG nova.compute.manager [None req-7a1e6ff3-fa41-4e5f-a75e-0ef70d2ddd09 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 32dd7fb0-7003-48cc-b688-4b94946c911f] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952#033[00m
Dec  9 10:55:33 compute-0 nova_compute[189493]: 2025-12-09 10:55:33.627 189497 DEBUG nova.network.neutron [None req-7a1e6ff3-fa41-4e5f-a75e-0ef70d2ddd09 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 32dd7fb0-7003-48cc-b688-4b94946c911f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156#033[00m
Dec  9 10:55:33 compute-0 nova_compute[189493]: 2025-12-09 10:55:33.656 189497 INFO nova.virt.libvirt.driver [None req-7a1e6ff3-fa41-4e5f-a75e-0ef70d2ddd09 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 32dd7fb0-7003-48cc-b688-4b94946c911f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec  9 10:55:33 compute-0 nova_compute[189493]: 2025-12-09 10:55:33.698 189497 DEBUG nova.compute.manager [None req-7a1e6ff3-fa41-4e5f-a75e-0ef70d2ddd09 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 32dd7fb0-7003-48cc-b688-4b94946c911f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec  9 10:55:33 compute-0 nova_compute[189493]: 2025-12-09 10:55:33.788 189497 DEBUG nova.compute.manager [None req-7a1e6ff3-fa41-4e5f-a75e-0ef70d2ddd09 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 32dd7fb0-7003-48cc-b688-4b94946c911f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec  9 10:55:33 compute-0 nova_compute[189493]: 2025-12-09 10:55:33.801 189497 DEBUG nova.virt.libvirt.driver [None req-7a1e6ff3-fa41-4e5f-a75e-0ef70d2ddd09 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 32dd7fb0-7003-48cc-b688-4b94946c911f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec  9 10:55:33 compute-0 nova_compute[189493]: 2025-12-09 10:55:33.802 189497 INFO nova.virt.libvirt.driver [None req-7a1e6ff3-fa41-4e5f-a75e-0ef70d2ddd09 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 32dd7fb0-7003-48cc-b688-4b94946c911f] Creating image(s)#033[00m
Dec  9 10:55:33 compute-0 nova_compute[189493]: 2025-12-09 10:55:33.804 189497 DEBUG oslo_concurrency.lockutils [None req-7a1e6ff3-fa41-4e5f-a75e-0ef70d2ddd09 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Acquiring lock "/var/lib/nova/instances/32dd7fb0-7003-48cc-b688-4b94946c911f/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  9 10:55:33 compute-0 nova_compute[189493]: 2025-12-09 10:55:33.805 189497 DEBUG oslo_concurrency.lockutils [None req-7a1e6ff3-fa41-4e5f-a75e-0ef70d2ddd09 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lock "/var/lib/nova/instances/32dd7fb0-7003-48cc-b688-4b94946c911f/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  9 10:55:33 compute-0 nova_compute[189493]: 2025-12-09 10:55:33.807 189497 DEBUG oslo_concurrency.lockutils [None req-7a1e6ff3-fa41-4e5f-a75e-0ef70d2ddd09 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lock "/var/lib/nova/instances/32dd7fb0-7003-48cc-b688-4b94946c911f/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  9 10:55:33 compute-0 nova_compute[189493]: 2025-12-09 10:55:33.833 189497 DEBUG oslo_concurrency.processutils [None req-7a1e6ff3-fa41-4e5f-a75e-0ef70d2ddd09 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e23edb89d785ecc8dd3ccb4d60aa458ce75a798 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  9 10:55:33 compute-0 nova_compute[189493]: 2025-12-09 10:55:33.859 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  9 10:55:33 compute-0 nova_compute[189493]: 2025-12-09 10:55:33.892 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  9 10:55:33 compute-0 nova_compute[189493]: 2025-12-09 10:55:33.893 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  9 10:55:33 compute-0 nova_compute[189493]: 2025-12-09 10:55:33.894 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  9 10:55:33 compute-0 nova_compute[189493]: 2025-12-09 10:55:33.895 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  9 10:55:33 compute-0 nova_compute[189493]: 2025-12-09 10:55:33.936 189497 DEBUG oslo_concurrency.processutils [None req-7a1e6ff3-fa41-4e5f-a75e-0ef70d2ddd09 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e23edb89d785ecc8dd3ccb4d60aa458ce75a798 --force-share --output=json" returned: 0 in 0.103s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  9 10:55:33 compute-0 nova_compute[189493]: 2025-12-09 10:55:33.938 189497 DEBUG oslo_concurrency.lockutils [None req-7a1e6ff3-fa41-4e5f-a75e-0ef70d2ddd09 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Acquiring lock "9e23edb89d785ecc8dd3ccb4d60aa458ce75a798" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  9 10:55:33 compute-0 nova_compute[189493]: 2025-12-09 10:55:33.940 189497 DEBUG oslo_concurrency.lockutils [None req-7a1e6ff3-fa41-4e5f-a75e-0ef70d2ddd09 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lock "9e23edb89d785ecc8dd3ccb4d60aa458ce75a798" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  9 10:55:33 compute-0 nova_compute[189493]: 2025-12-09 10:55:33.961 189497 DEBUG oslo_concurrency.processutils [None req-7a1e6ff3-fa41-4e5f-a75e-0ef70d2ddd09 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e23edb89d785ecc8dd3ccb4d60aa458ce75a798 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  9 10:55:34 compute-0 nova_compute[189493]: 2025-12-09 10:55:34.048 189497 DEBUG oslo_concurrency.processutils [None req-7a1e6ff3-fa41-4e5f-a75e-0ef70d2ddd09 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e23edb89d785ecc8dd3ccb4d60aa458ce75a798 --force-share --output=json" returned: 0 in 0.087s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  9 10:55:34 compute-0 nova_compute[189493]: 2025-12-09 10:55:34.062 189497 DEBUG oslo_concurrency.processutils [None req-7a1e6ff3-fa41-4e5f-a75e-0ef70d2ddd09 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/9e23edb89d785ecc8dd3ccb4d60aa458ce75a798,backing_fmt=raw /var/lib/nova/instances/32dd7fb0-7003-48cc-b688-4b94946c911f/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  9 10:55:34 compute-0 nova_compute[189493]: 2025-12-09 10:55:34.095 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1bddf2bf-8932-4428-97d7-7342a7ec414b/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  9 10:55:34 compute-0 nova_compute[189493]: 2025-12-09 10:55:34.131 189497 DEBUG oslo_concurrency.processutils [None req-7a1e6ff3-fa41-4e5f-a75e-0ef70d2ddd09 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/9e23edb89d785ecc8dd3ccb4d60aa458ce75a798,backing_fmt=raw /var/lib/nova/instances/32dd7fb0-7003-48cc-b688-4b94946c911f/disk 1073741824" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  9 10:55:34 compute-0 nova_compute[189493]: 2025-12-09 10:55:34.132 189497 DEBUG oslo_concurrency.lockutils [None req-7a1e6ff3-fa41-4e5f-a75e-0ef70d2ddd09 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lock "9e23edb89d785ecc8dd3ccb4d60aa458ce75a798" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.192s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  9 10:55:34 compute-0 nova_compute[189493]: 2025-12-09 10:55:34.133 189497 DEBUG oslo_concurrency.processutils [None req-7a1e6ff3-fa41-4e5f-a75e-0ef70d2ddd09 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e23edb89d785ecc8dd3ccb4d60aa458ce75a798 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  9 10:55:34 compute-0 nova_compute[189493]: 2025-12-09 10:55:34.189 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1bddf2bf-8932-4428-97d7-7342a7ec414b/disk --force-share --output=json" returned: 0 in 0.094s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  9 10:55:34 compute-0 nova_compute[189493]: 2025-12-09 10:55:34.192 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1bddf2bf-8932-4428-97d7-7342a7ec414b/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  9 10:55:34 compute-0 nova_compute[189493]: 2025-12-09 10:55:34.219 189497 DEBUG oslo_concurrency.processutils [None req-7a1e6ff3-fa41-4e5f-a75e-0ef70d2ddd09 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e23edb89d785ecc8dd3ccb4d60aa458ce75a798 --force-share --output=json" returned: 0 in 0.086s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  9 10:55:34 compute-0 nova_compute[189493]: 2025-12-09 10:55:34.222 189497 DEBUG nova.virt.disk.api [None req-7a1e6ff3-fa41-4e5f-a75e-0ef70d2ddd09 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Checking if we can resize image /var/lib/nova/instances/32dd7fb0-7003-48cc-b688-4b94946c911f/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166#033[00m
Dec  9 10:55:34 compute-0 nova_compute[189493]: 2025-12-09 10:55:34.223 189497 DEBUG oslo_concurrency.processutils [None req-7a1e6ff3-fa41-4e5f-a75e-0ef70d2ddd09 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/32dd7fb0-7003-48cc-b688-4b94946c911f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  9 10:55:34 compute-0 nova_compute[189493]: 2025-12-09 10:55:34.287 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1bddf2bf-8932-4428-97d7-7342a7ec414b/disk --force-share --output=json" returned: 0 in 0.095s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  9 10:55:34 compute-0 nova_compute[189493]: 2025-12-09 10:55:34.288 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  9 10:55:34 compute-0 nova_compute[189493]: 2025-12-09 10:55:34.314 189497 DEBUG oslo_concurrency.processutils [None req-7a1e6ff3-fa41-4e5f-a75e-0ef70d2ddd09 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/32dd7fb0-7003-48cc-b688-4b94946c911f/disk --force-share --output=json" returned: 0 in 0.091s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  9 10:55:34 compute-0 nova_compute[189493]: 2025-12-09 10:55:34.325 189497 DEBUG nova.virt.disk.api [None req-7a1e6ff3-fa41-4e5f-a75e-0ef70d2ddd09 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Cannot resize image /var/lib/nova/instances/32dd7fb0-7003-48cc-b688-4b94946c911f/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172#033[00m
Dec  9 10:55:34 compute-0 nova_compute[189493]: 2025-12-09 10:55:34.326 189497 DEBUG nova.objects.instance [None req-7a1e6ff3-fa41-4e5f-a75e-0ef70d2ddd09 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lazy-loading 'migration_context' on Instance uuid 32dd7fb0-7003-48cc-b688-4b94946c911f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  9 10:55:34 compute-0 nova_compute[189493]: 2025-12-09 10:55:34.359 189497 DEBUG oslo_concurrency.lockutils [None req-7a1e6ff3-fa41-4e5f-a75e-0ef70d2ddd09 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Acquiring lock "/var/lib/nova/instances/32dd7fb0-7003-48cc-b688-4b94946c911f/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  9 10:55:34 compute-0 nova_compute[189493]: 2025-12-09 10:55:34.360 189497 DEBUG oslo_concurrency.lockutils [None req-7a1e6ff3-fa41-4e5f-a75e-0ef70d2ddd09 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lock "/var/lib/nova/instances/32dd7fb0-7003-48cc-b688-4b94946c911f/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  9 10:55:34 compute-0 nova_compute[189493]: 2025-12-09 10:55:34.363 189497 DEBUG oslo_concurrency.lockutils [None req-7a1e6ff3-fa41-4e5f-a75e-0ef70d2ddd09 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lock "/var/lib/nova/instances/32dd7fb0-7003-48cc-b688-4b94946c911f/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  9 10:55:34 compute-0 nova_compute[189493]: 2025-12-09 10:55:34.390 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.eph0 --force-share --output=json" returned: 0 in 0.102s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  9 10:55:34 compute-0 nova_compute[189493]: 2025-12-09 10:55:34.391 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  9 10:55:34 compute-0 nova_compute[189493]: 2025-12-09 10:55:34.418 189497 DEBUG oslo_concurrency.processutils [None req-7a1e6ff3-fa41-4e5f-a75e-0ef70d2ddd09 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  9 10:55:34 compute-0 nova_compute[189493]: 2025-12-09 10:55:34.481 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.eph0 --force-share --output=json" returned: 0 in 0.090s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  9 10:55:34 compute-0 nova_compute[189493]: 2025-12-09 10:55:34.494 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  9 10:55:34 compute-0 nova_compute[189493]: 2025-12-09 10:55:34.516 189497 DEBUG oslo_concurrency.processutils [None req-7a1e6ff3-fa41-4e5f-a75e-0ef70d2ddd09 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.098s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  9 10:55:34 compute-0 nova_compute[189493]: 2025-12-09 10:55:34.518 189497 DEBUG oslo_concurrency.lockutils [None req-7a1e6ff3-fa41-4e5f-a75e-0ef70d2ddd09 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Acquiring lock "ephemeral_1_0706d66" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  9 10:55:34 compute-0 nova_compute[189493]: 2025-12-09 10:55:34.520 189497 DEBUG oslo_concurrency.lockutils [None req-7a1e6ff3-fa41-4e5f-a75e-0ef70d2ddd09 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lock "ephemeral_1_0706d66" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  9 10:55:34 compute-0 nova_compute[189493]: 2025-12-09 10:55:34.543 189497 DEBUG oslo_concurrency.processutils [None req-7a1e6ff3-fa41-4e5f-a75e-0ef70d2ddd09 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  9 10:55:34 compute-0 nova_compute[189493]: 2025-12-09 10:55:34.572 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  9 10:55:34 compute-0 nova_compute[189493]: 2025-12-09 10:55:34.574 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  9 10:55:34 compute-0 nova_compute[189493]: 2025-12-09 10:55:34.604 189497 DEBUG oslo_concurrency.processutils [None req-7a1e6ff3-fa41-4e5f-a75e-0ef70d2ddd09 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  9 10:55:34 compute-0 nova_compute[189493]: 2025-12-09 10:55:34.607 189497 DEBUG oslo_concurrency.processutils [None req-7a1e6ff3-fa41-4e5f-a75e-0ef70d2ddd09 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/32dd7fb0-7003-48cc-b688-4b94946c911f/disk.eph0 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  9 10:55:34 compute-0 nova_compute[189493]: 2025-12-09 10:55:34.645 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  9 10:55:34 compute-0 nova_compute[189493]: 2025-12-09 10:55:34.647 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  9 10:55:34 compute-0 nova_compute[189493]: 2025-12-09 10:55:34.676 189497 DEBUG oslo_concurrency.processutils [None req-7a1e6ff3-fa41-4e5f-a75e-0ef70d2ddd09 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/32dd7fb0-7003-48cc-b688-4b94946c911f/disk.eph0 1073741824" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
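[Note: the qemu-img create above builds the instance's 1 GiB ephemeral disk as a qcow2 overlay on the shared base image under _base, so only the instance's writes consume space while reads fall through to the cached raw base. A standalone equivalent via the same processutils helper; the instance path is illustrative:

    from oslo_concurrency import processutils

    # 1 GiB qcow2 overlay backed by the cached raw ephemeral base image.
    processutils.execute(
        'qemu-img', 'create', '-f', 'qcow2',
        '-o', 'backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66'
              ',backing_fmt=raw',
        '/var/lib/nova/instances/<instance-uuid>/disk.eph0', '1073741824',
        env_variables={'LC_ALL': 'C', 'LANG': 'C'},
    )
]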
Dec  9 10:55:34 compute-0 nova_compute[189493]: 2025-12-09 10:55:34.678 189497 DEBUG oslo_concurrency.lockutils [None req-7a1e6ff3-fa41-4e5f-a75e-0ef70d2ddd09 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lock "ephemeral_1_0706d66" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.158s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
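[Note: the Acquiring/acquired/released triple around "ephemeral_1_0706d66" is oslo's named in-process lock; it serialises concurrent builds that would otherwise race on the same _base file. The pattern in decorator form, lock name taken from the log:

    from oslo_concurrency import lockutils

    # external=False matches the logged behaviour: an in-process lock,
    # not a file-based inter-process lock.
    @lockutils.synchronized('ephemeral_1_0706d66', external=False)
    def create_qcow2_image():
        # ... run the qemu-img create shown above ...
        pass
]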
Dec  9 10:55:34 compute-0 nova_compute[189493]: 2025-12-09 10:55:34.680 189497 DEBUG oslo_concurrency.processutils [None req-7a1e6ff3-fa41-4e5f-a75e-0ef70d2ddd09 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  9 10:55:34 compute-0 nova_compute[189493]: 2025-12-09 10:55:34.742 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.eph0 --force-share --output=json" returned: 0 in 0.095s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  9 10:55:34 compute-0 nova_compute[189493]: 2025-12-09 10:55:34.744 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  9 10:55:34 compute-0 nova_compute[189493]: 2025-12-09 10:55:34.764 189497 DEBUG oslo_concurrency.processutils [None req-7a1e6ff3-fa41-4e5f-a75e-0ef70d2ddd09 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.084s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  9 10:55:34 compute-0 nova_compute[189493]: 2025-12-09 10:55:34.766 189497 DEBUG nova.virt.libvirt.driver [None req-7a1e6ff3-fa41-4e5f-a75e-0ef70d2ddd09 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 32dd7fb0-7003-48cc-b688-4b94946c911f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec  9 10:55:34 compute-0 nova_compute[189493]: 2025-12-09 10:55:34.767 189497 DEBUG nova.virt.libvirt.driver [None req-7a1e6ff3-fa41-4e5f-a75e-0ef70d2ddd09 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 32dd7fb0-7003-48cc-b688-4b94946c911f] Ensure instance console log exists: /var/lib/nova/instances/32dd7fb0-7003-48cc-b688-4b94946c911f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
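[Note: "Ensure instance console log exists" amounts to touching the file so that the libvirt <serial> log target is present before the domain boots; a sketch, not nova's exact code, path illustrative:

    # Create the console log if missing; leave it untouched otherwise.
    open('/var/lib/nova/instances/<instance-uuid>/console.log', 'a').close()
]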
Dec  9 10:55:34 compute-0 nova_compute[189493]: 2025-12-09 10:55:34.769 189497 DEBUG oslo_concurrency.lockutils [None req-7a1e6ff3-fa41-4e5f-a75e-0ef70d2ddd09 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  9 10:55:34 compute-0 nova_compute[189493]: 2025-12-09 10:55:34.770 189497 DEBUG oslo_concurrency.lockutils [None req-7a1e6ff3-fa41-4e5f-a75e-0ef70d2ddd09 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  9 10:55:34 compute-0 nova_compute[189493]: 2025-12-09 10:55:34.771 189497 DEBUG oslo_concurrency.lockutils [None req-7a1e6ff3-fa41-4e5f-a75e-0ef70d2ddd09 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  9 10:55:34 compute-0 nova_compute[189493]: 2025-12-09 10:55:34.810 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.eph0 --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  9 10:55:34 compute-0 podman[242800]: 2025-12-09 10:55:34.963254128 +0000 UTC m=+0.116235415 container health_status 0391d8911d61abd7376f1f93f329cadfe8d3add845c9e6f46fc2c3dfbcc4f02a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  9 10:55:35 compute-0 nova_compute[189493]: 2025-12-09 10:55:35.232 189497 WARNING nova.virt.libvirt.driver [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  9 10:55:35 compute-0 nova_compute[189493]: 2025-12-09 10:55:35.243 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5041MB free_disk=72.16319274902344GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  9 10:55:35 compute-0 nova_compute[189493]: 2025-12-09 10:55:35.244 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  9 10:55:35 compute-0 nova_compute[189493]: 2025-12-09 10:55:35.244 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  9 10:55:35 compute-0 nova_compute[189493]: 2025-12-09 10:55:35.324 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Instance 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  9 10:55:35 compute-0 nova_compute[189493]: 2025-12-09 10:55:35.325 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Instance 1bddf2bf-8932-4428-97d7-7342a7ec414b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  9 10:55:35 compute-0 nova_compute[189493]: 2025-12-09 10:55:35.325 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Instance 32dd7fb0-7003-48cc-b688-4b94946c911f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  9 10:55:35 compute-0 nova_compute[189493]: 2025-12-09 10:55:35.326 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  9 10:55:35 compute-0 nova_compute[189493]: 2025-12-09 10:55:35.326 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=2048MB phys_disk=79GB used_disk=6GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  9 10:55:35 compute-0 nova_compute[189493]: 2025-12-09 10:55:35.440 189497 DEBUG nova.compute.provider_tree [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Inventory has not changed in ProviderTree for provider: cdc1168d-33c9-4d2c-8f23-1b695a68afd0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  9 10:55:35 compute-0 nova_compute[189493]: 2025-12-09 10:55:35.489 189497 DEBUG nova.scheduler.client.report [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Inventory has not changed for provider cdc1168d-33c9-4d2c-8f23-1b695a68afd0 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
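[Note: the inventory above is what placement schedules against: effective capacity per resource class is (total - reserved) * allocation_ratio. It also squares with the final resource view two lines earlier, where used_ram=2048MB is 3 instances x 512 MB plus the 512 MB reservation. Worked out from the logged figures:

    # Effective schedulable capacity from the logged inventory.
    inv = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 79,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, v in inv.items():
        print(rc, (v['total'] - v['reserved']) * v['allocation_ratio'])
    # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 70.2
]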
Dec  9 10:55:35 compute-0 nova_compute[189493]: 2025-12-09 10:55:35.529 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  9 10:55:35 compute-0 nova_compute[189493]: 2025-12-09 10:55:35.531 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.287s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  9 10:55:35 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:55:35.973 106644 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=9ec27861-bbe8-48fb-b30f-25b967e1609e, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '5'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
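[Note: the metadata agent's transaction above bumps its southbound config sequence number on the Chassis_Private row. A sketch of the same write through ovsdbapp's public API, assuming api is an already-connected southbound IDL backend (connection setup omitted); the logged command additionally carries if_exists=True so the write is a no-op if the record has vanished:

    # Merge one key into the external_ids map column of the chassis row.
    api.db_set(
        'Chassis_Private', '9ec27861-bbe8-48fb-b30f-25b967e1609e',
        ('external_ids', {'neutron:ovn-metadata-sb-cfg': '5'}),
    ).execute(check_error=True)
]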
Dec  9 10:55:36 compute-0 nova_compute[189493]: 2025-12-09 10:55:36.759 189497 DEBUG nova.network.neutron [None req-7a1e6ff3-fa41-4e5f-a75e-0ef70d2ddd09 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 32dd7fb0-7003-48cc-b688-4b94946c911f] Successfully updated port: d6164edf-adb9-4fa5-9e6d-bae85d8af633 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec  9 10:55:36 compute-0 nova_compute[189493]: 2025-12-09 10:55:36.778 189497 DEBUG oslo_concurrency.lockutils [None req-7a1e6ff3-fa41-4e5f-a75e-0ef70d2ddd09 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Acquiring lock "refresh_cache-32dd7fb0-7003-48cc-b688-4b94946c911f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  9 10:55:36 compute-0 nova_compute[189493]: 2025-12-09 10:55:36.778 189497 DEBUG oslo_concurrency.lockutils [None req-7a1e6ff3-fa41-4e5f-a75e-0ef70d2ddd09 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Acquired lock "refresh_cache-32dd7fb0-7003-48cc-b688-4b94946c911f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  9 10:55:36 compute-0 nova_compute[189493]: 2025-12-09 10:55:36.779 189497 DEBUG nova.network.neutron [None req-7a1e6ff3-fa41-4e5f-a75e-0ef70d2ddd09 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 32dd7fb0-7003-48cc-b688-4b94946c911f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  9 10:55:36 compute-0 nova_compute[189493]: 2025-12-09 10:55:36.885 189497 DEBUG nova.compute.manager [req-0ab7bb34-8f2c-41f0-8bf9-ada69ced9192 req-7ed35e7a-1154-4696-b477-9aa4bd2ff162 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] [instance: 32dd7fb0-7003-48cc-b688-4b94946c911f] Received event network-changed-d6164edf-adb9-4fa5-9e6d-bae85d8af633 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  9 10:55:36 compute-0 nova_compute[189493]: 2025-12-09 10:55:36.886 189497 DEBUG nova.compute.manager [req-0ab7bb34-8f2c-41f0-8bf9-ada69ced9192 req-7ed35e7a-1154-4696-b477-9aa4bd2ff162 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] [instance: 32dd7fb0-7003-48cc-b688-4b94946c911f] Refreshing instance network info cache due to event network-changed-d6164edf-adb9-4fa5-9e6d-bae85d8af633. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  9 10:55:36 compute-0 nova_compute[189493]: 2025-12-09 10:55:36.887 189497 DEBUG oslo_concurrency.lockutils [req-0ab7bb34-8f2c-41f0-8bf9-ada69ced9192 req-7ed35e7a-1154-4696-b477-9aa4bd2ff162 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] Acquiring lock "refresh_cache-32dd7fb0-7003-48cc-b688-4b94946c911f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  9 10:55:36 compute-0 nova_compute[189493]: 2025-12-09 10:55:36.942 189497 DEBUG nova.network.neutron [None req-7a1e6ff3-fa41-4e5f-a75e-0ef70d2ddd09 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 32dd7fb0-7003-48cc-b688-4b94946c911f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  9 10:55:37 compute-0 nova_compute[189493]: 2025-12-09 10:55:37.243 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 10:55:37 compute-0 nova_compute[189493]: 2025-12-09 10:55:37.832 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 10:55:37 compute-0 podman[242820]: 2025-12-09 10:55:37.907819702 +0000 UTC m=+0.065617087 container health_status 8508a94dacd5acdb5dbf860f4282331529be5c86ebd3e90b10e1dde8bc5013e9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  9 10:55:38 compute-0 nova_compute[189493]: 2025-12-09 10:55:38.514 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  9 10:55:38 compute-0 nova_compute[189493]: 2025-12-09 10:55:38.514 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  9 10:55:38 compute-0 nova_compute[189493]: 2025-12-09 10:55:38.774 189497 DEBUG nova.network.neutron [None req-7a1e6ff3-fa41-4e5f-a75e-0ef70d2ddd09 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 32dd7fb0-7003-48cc-b688-4b94946c911f] Updating instance_info_cache with network_info: [{"id": "d6164edf-adb9-4fa5-9e6d-bae85d8af633", "address": "fa:16:3e:83:9f:5d", "network": {"id": "c5af7354-5afe-400a-9e13-5500648117d8", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.98", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.244", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "736bbfddbeea47e3ac9d863ba120b8f2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd6164edf-ad", "ovs_interfaceid": "d6164edf-adb9-4fa5-9e6d-bae85d8af633", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  9 10:55:38 compute-0 nova_compute[189493]: 2025-12-09 10:55:38.812 189497 DEBUG oslo_concurrency.lockutils [None req-7a1e6ff3-fa41-4e5f-a75e-0ef70d2ddd09 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Releasing lock "refresh_cache-32dd7fb0-7003-48cc-b688-4b94946c911f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  9 10:55:38 compute-0 nova_compute[189493]: 2025-12-09 10:55:38.813 189497 DEBUG nova.compute.manager [None req-7a1e6ff3-fa41-4e5f-a75e-0ef70d2ddd09 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 32dd7fb0-7003-48cc-b688-4b94946c911f] Instance network_info: |[{"id": "d6164edf-adb9-4fa5-9e6d-bae85d8af633", "address": "fa:16:3e:83:9f:5d", "network": {"id": "c5af7354-5afe-400a-9e13-5500648117d8", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.98", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.244", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "736bbfddbeea47e3ac9d863ba120b8f2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd6164edf-ad", "ovs_interfaceid": "d6164edf-adb9-4fa5-9e6d-bae85d8af633", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
Dec  9 10:55:38 compute-0 nova_compute[189493]: 2025-12-09 10:55:38.815 189497 DEBUG oslo_concurrency.lockutils [req-0ab7bb34-8f2c-41f0-8bf9-ada69ced9192 req-7ed35e7a-1154-4696-b477-9aa4bd2ff162 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] Acquired lock "refresh_cache-32dd7fb0-7003-48cc-b688-4b94946c911f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  9 10:55:38 compute-0 nova_compute[189493]: 2025-12-09 10:55:38.816 189497 DEBUG nova.network.neutron [req-0ab7bb34-8f2c-41f0-8bf9-ada69ced9192 req-7ed35e7a-1154-4696-b477-9aa4bd2ff162 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] [instance: 32dd7fb0-7003-48cc-b688-4b94946c911f] Refreshing network info cache for port d6164edf-adb9-4fa5-9e6d-bae85d8af633 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  9 10:55:38 compute-0 nova_compute[189493]: 2025-12-09 10:55:38.823 189497 DEBUG nova.virt.libvirt.driver [None req-7a1e6ff3-fa41-4e5f-a75e-0ef70d2ddd09 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 32dd7fb0-7003-48cc-b688-4b94946c911f] Start _get_guest_xml network_info=[{"id": "d6164edf-adb9-4fa5-9e6d-bae85d8af633", "address": "fa:16:3e:83:9f:5d", "network": {"id": "c5af7354-5afe-400a-9e13-5500648117d8", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.98", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.244", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "736bbfddbeea47e3ac9d863ba120b8f2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd6164edf-ad", "ovs_interfaceid": "d6164edf-adb9-4fa5-9e6d-bae85d8af633", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.eph0': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2025-12-09T10:47:15Z,direct_url=<?>,disk_format='qcow2',id=53d12211-5d5c-4333-b3ee-e3dcf1663767,min_disk=0,min_ram=0,name='cirros',owner='736bbfddbeea47e3ac9d863ba120b8f2',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2025-12-09T10:47:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'encryption_options': None, 'encryption_format': None, 'disk_bus': 'virtio', 'boot_index': 0, 'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'size': 0, 'image_id': '53d12211-5d5c-4333-b3ee-e3dcf1663767'}], 'ephemerals': [{'encrypted': False, 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'device_name': '/dev/vdb', 'device_type': 'disk', 'guest_format': None, 'size': 1}], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  9 10:55:38 compute-0 nova_compute[189493]: 2025-12-09 10:55:38.836 189497 WARNING nova.virt.libvirt.driver [None req-7a1e6ff3-fa41-4e5f-a75e-0ef70d2ddd09 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  9 10:55:38 compute-0 nova_compute[189493]: 2025-12-09 10:55:38.855 189497 DEBUG nova.virt.libvirt.host [None req-7a1e6ff3-fa41-4e5f-a75e-0ef70d2ddd09 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  9 10:55:38 compute-0 nova_compute[189493]: 2025-12-09 10:55:38.856 189497 DEBUG nova.virt.libvirt.host [None req-7a1e6ff3-fa41-4e5f-a75e-0ef70d2ddd09 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  9 10:55:38 compute-0 nova_compute[189493]: 2025-12-09 10:55:38.868 189497 DEBUG nova.virt.libvirt.host [None req-7a1e6ff3-fa41-4e5f-a75e-0ef70d2ddd09 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  9 10:55:38 compute-0 nova_compute[189493]: 2025-12-09 10:55:38.869 189497 DEBUG nova.virt.libvirt.host [None req-7a1e6ff3-fa41-4e5f-a75e-0ef70d2ddd09 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
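[Note: the two probes above first look for a cgroup-v1 cpu controller (absent on this cgroup-v2 host), then confirm the v2 controller, which libvirt needs to honour CPU shares and quota. In essence the v2 check reduces to reading the root controllers file; a sketch, not nova's exact code:

    # On a cgroup-v2 host this file lists the delegatable controllers;
    # 'cpu' must be among them.
    with open('/sys/fs/cgroup/cgroup.controllers') as f:
        has_cpu = 'cpu' in f.read().split()
]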
Dec  9 10:55:38 compute-0 nova_compute[189493]: 2025-12-09 10:55:38.871 189497 DEBUG nova.virt.libvirt.driver [None req-7a1e6ff3-fa41-4e5f-a75e-0ef70d2ddd09 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  9 10:55:38 compute-0 nova_compute[189493]: 2025-12-09 10:55:38.872 189497 DEBUG nova.virt.hardware [None req-7a1e6ff3-fa41-4e5f-a75e-0ef70d2ddd09 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-09T10:47:21Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=1,extra_specs={},flavorid='cf91b364-8467-4d1e-8c92-f7d1fab99905',id=1,is_public=True,memory_mb=512,name='m1.small',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2025-12-09T10:47:15Z,direct_url=<?>,disk_format='qcow2',id=53d12211-5d5c-4333-b3ee-e3dcf1663767,min_disk=0,min_ram=0,name='cirros',owner='736bbfddbeea47e3ac9d863ba120b8f2',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2025-12-09T10:47:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  9 10:55:38 compute-0 nova_compute[189493]: 2025-12-09 10:55:38.873 189497 DEBUG nova.virt.hardware [None req-7a1e6ff3-fa41-4e5f-a75e-0ef70d2ddd09 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  9 10:55:38 compute-0 nova_compute[189493]: 2025-12-09 10:55:38.875 189497 DEBUG nova.virt.hardware [None req-7a1e6ff3-fa41-4e5f-a75e-0ef70d2ddd09 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  9 10:55:38 compute-0 nova_compute[189493]: 2025-12-09 10:55:38.876 189497 DEBUG nova.virt.hardware [None req-7a1e6ff3-fa41-4e5f-a75e-0ef70d2ddd09 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  9 10:55:38 compute-0 nova_compute[189493]: 2025-12-09 10:55:38.877 189497 DEBUG nova.virt.hardware [None req-7a1e6ff3-fa41-4e5f-a75e-0ef70d2ddd09 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  9 10:55:38 compute-0 nova_compute[189493]: 2025-12-09 10:55:38.878 189497 DEBUG nova.virt.hardware [None req-7a1e6ff3-fa41-4e5f-a75e-0ef70d2ddd09 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  9 10:55:38 compute-0 nova_compute[189493]: 2025-12-09 10:55:38.880 189497 DEBUG nova.virt.hardware [None req-7a1e6ff3-fa41-4e5f-a75e-0ef70d2ddd09 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  9 10:55:38 compute-0 nova_compute[189493]: 2025-12-09 10:55:38.881 189497 DEBUG nova.virt.hardware [None req-7a1e6ff3-fa41-4e5f-a75e-0ef70d2ddd09 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  9 10:55:38 compute-0 nova_compute[189493]: 2025-12-09 10:55:38.882 189497 DEBUG nova.virt.hardware [None req-7a1e6ff3-fa41-4e5f-a75e-0ef70d2ddd09 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  9 10:55:38 compute-0 nova_compute[189493]: 2025-12-09 10:55:38.883 189497 DEBUG nova.virt.hardware [None req-7a1e6ff3-fa41-4e5f-a75e-0ef70d2ddd09 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  9 10:55:38 compute-0 nova_compute[189493]: 2025-12-09 10:55:38.884 189497 DEBUG nova.virt.hardware [None req-7a1e6ff3-fa41-4e5f-a75e-0ef70d2ddd09 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
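[Note: the topology walk above shows that with no flavor or image constraints (limits and preferences all 0:0:0, caps of 65536 each), the only factorisation of 1 vCPU into sockets x cores x threads is 1:1:1. The selection in miniature:

    import itertools

    # Enumerate (sockets, cores, threads) triples whose product equals
    # the vCPU count; for vcpus=1 this collapses to [(1, 1, 1)].
    vcpus = 1
    topologies = [t for t in itertools.product(range(1, vcpus + 1), repeat=3)
                  if t[0] * t[1] * t[2] == vcpus]
]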
Dec  9 10:55:38 compute-0 nova_compute[189493]: 2025-12-09 10:55:38.892 189497 DEBUG nova.virt.libvirt.vif [None req-7a1e6ff3-fa41-4e5f-a75e-0ef70d2ddd09 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-09T10:55:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='vn-afn7y6w-fel25ona52mn-zi55qxbdeak4-vnf-r5yma3vxwd5y',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-afn7y6w-fel25ona52mn-zi55qxbdeak4-vnf-r5yma3vxwd5y',id=3,image_ref='53d12211-5d5c-4333-b3ee-e3dcf1663767',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='24f6e5b2-dd43-46f1-87a4-e2efc1300914'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='736bbfddbeea47e3ac9d863ba120b8f2',ramdisk_id='',reservation_id='r-8nh5c9bf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='admin,reader,member',image_base_image_ref='53d12211-5d5c-4333-b3ee-e3dcf1663767',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-09T10:55:33Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT04MjgxNTk2NzQ2NzMwMzA2NTI0PT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTgyODE1OTY3NDY3MzAzMDY1MjQ9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09ODI4MTU5Njc0NjczMDMwNjUyND09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBo
YXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTgyODE1OTY3NDY3MzAzMDY1MjQ9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT04MjgxNTk2NzQ2NzMwMzA2NTI0PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT04MjgxNTk2NzQ2NzMwMzA2NTI0PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5
kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvKCclc1xuJywgJyAnLmpvaW4oYXJncykpICAjI
Dec  9 10:55:38 compute-0 nova_compute[189493]: ywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09ODI4MTU5Njc0NjczMDMwNjUyND09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTgyODE1OTY3NDY3MzAzMDY1MjQ9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT04MjgxNTk2NzQ2NzMwMzA2NTI0PT0tLQo=',user_id='e6d3a937c2a74eb0816d9f63820935e0',uuid=32dd7fb0-7003-48cc-b688-4b94946c911f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d6164edf-adb9-4fa5-9e6d-bae85d8af633", "address": "fa:16:3e:83:9f:5d", "network": {"id": "c5af7354-5afe-400a-9e13-5500648117d8", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.98", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.244", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "736bbfddbeea47e3ac9d863ba120b8f2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": 
"ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd6164edf-ad", "ovs_interfaceid": "d6164edf-adb9-4fa5-9e6d-bae85d8af633", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  9 10:55:38 compute-0 nova_compute[189493]: 2025-12-09 10:55:38.893 189497 DEBUG nova.network.os_vif_util [None req-7a1e6ff3-fa41-4e5f-a75e-0ef70d2ddd09 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Converting VIF {"id": "d6164edf-adb9-4fa5-9e6d-bae85d8af633", "address": "fa:16:3e:83:9f:5d", "network": {"id": "c5af7354-5afe-400a-9e13-5500648117d8", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.98", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.244", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "736bbfddbeea47e3ac9d863ba120b8f2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd6164edf-ad", "ovs_interfaceid": "d6164edf-adb9-4fa5-9e6d-bae85d8af633", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  9 10:55:38 compute-0 nova_compute[189493]: 2025-12-09 10:55:38.895 189497 DEBUG nova.network.os_vif_util [None req-7a1e6ff3-fa41-4e5f-a75e-0ef70d2ddd09 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:83:9f:5d,bridge_name='br-int',has_traffic_filtering=True,id=d6164edf-adb9-4fa5-9e6d-bae85d8af633,network=Network(c5af7354-5afe-400a-9e13-5500648117d8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapd6164edf-ad') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
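[Note: the conversion above turns nova's network_info dict into an os-vif VIFOpenVSwitch object that the ovs plugin later plugs into br-int. A hedged sketch of constructing the same object directly, with field values copied from the logged result:

    import os_vif
    from os_vif.objects import vif as vif_obj

    os_vif.initialize()  # load the os-vif plugins (ovs among them)
    vif = vif_obj.VIFOpenVSwitch(
        id='d6164edf-adb9-4fa5-9e6d-bae85d8af633',
        address='fa:16:3e:83:9f:5d',
        bridge_name='br-int',
        vif_name='tapd6164edf-ad',
        has_traffic_filtering=True,
        preserve_on_delete=True,
    )
]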
Dec  9 10:55:38 compute-0 nova_compute[189493]: 2025-12-09 10:55:38.897 189497 DEBUG nova.objects.instance [None req-7a1e6ff3-fa41-4e5f-a75e-0ef70d2ddd09 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lazy-loading 'pci_devices' on Instance uuid 32dd7fb0-7003-48cc-b688-4b94946c911f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  9 10:55:38 compute-0 nova_compute[189493]: 2025-12-09 10:55:38.914 189497 DEBUG nova.virt.libvirt.driver [None req-7a1e6ff3-fa41-4e5f-a75e-0ef70d2ddd09 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 32dd7fb0-7003-48cc-b688-4b94946c911f] End _get_guest_xml xml=<domain type="kvm">
Dec  9 10:55:38 compute-0 nova_compute[189493]:  <uuid>32dd7fb0-7003-48cc-b688-4b94946c911f</uuid>
Dec  9 10:55:38 compute-0 nova_compute[189493]:  <name>instance-00000003</name>
Dec  9 10:55:38 compute-0 nova_compute[189493]:  <memory>524288</memory>
Dec  9 10:55:38 compute-0 nova_compute[189493]:  <vcpu>1</vcpu>
Dec  9 10:55:38 compute-0 nova_compute[189493]:  <metadata>
Dec  9 10:55:38 compute-0 nova_compute[189493]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  9 10:55:38 compute-0 nova_compute[189493]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  9 10:55:38 compute-0 nova_compute[189493]:      <nova:name>vn-afn7y6w-fel25ona52mn-zi55qxbdeak4-vnf-r5yma3vxwd5y</nova:name>
Dec  9 10:55:38 compute-0 nova_compute[189493]:      <nova:creationTime>2025-12-09 10:55:38</nova:creationTime>
Dec  9 10:55:38 compute-0 nova_compute[189493]:      <nova:flavor name="m1.small">
Dec  9 10:55:38 compute-0 nova_compute[189493]:        <nova:memory>512</nova:memory>
Dec  9 10:55:38 compute-0 nova_compute[189493]:        <nova:disk>1</nova:disk>
Dec  9 10:55:38 compute-0 nova_compute[189493]:        <nova:swap>0</nova:swap>
Dec  9 10:55:38 compute-0 nova_compute[189493]:        <nova:ephemeral>1</nova:ephemeral>
Dec  9 10:55:38 compute-0 nova_compute[189493]:        <nova:vcpus>1</nova:vcpus>
Dec  9 10:55:38 compute-0 nova_compute[189493]:      </nova:flavor>
Dec  9 10:55:38 compute-0 nova_compute[189493]:      <nova:owner>
Dec  9 10:55:38 compute-0 nova_compute[189493]:        <nova:user uuid="e6d3a937c2a74eb0816d9f63820935e0">admin</nova:user>
Dec  9 10:55:38 compute-0 nova_compute[189493]:        <nova:project uuid="736bbfddbeea47e3ac9d863ba120b8f2">admin</nova:project>
Dec  9 10:55:38 compute-0 nova_compute[189493]:      </nova:owner>
Dec  9 10:55:38 compute-0 nova_compute[189493]:      <nova:root type="image" uuid="53d12211-5d5c-4333-b3ee-e3dcf1663767"/>
Dec  9 10:55:38 compute-0 nova_compute[189493]:      <nova:ports>
Dec  9 10:55:38 compute-0 nova_compute[189493]:        <nova:port uuid="d6164edf-adb9-4fa5-9e6d-bae85d8af633">
Dec  9 10:55:38 compute-0 nova_compute[189493]:          <nova:ip type="fixed" address="192.168.0.98" ipVersion="4"/>
Dec  9 10:55:38 compute-0 nova_compute[189493]:        </nova:port>
Dec  9 10:55:38 compute-0 nova_compute[189493]:      </nova:ports>
Dec  9 10:55:38 compute-0 nova_compute[189493]:    </nova:instance>
Dec  9 10:55:38 compute-0 nova_compute[189493]:  </metadata>
Dec  9 10:55:38 compute-0 nova_compute[189493]:  <sysinfo type="smbios">
Dec  9 10:55:38 compute-0 nova_compute[189493]:    <system>
Dec  9 10:55:38 compute-0 nova_compute[189493]:      <entry name="manufacturer">RDO</entry>
Dec  9 10:55:38 compute-0 nova_compute[189493]:      <entry name="product">OpenStack Compute</entry>
Dec  9 10:55:38 compute-0 nova_compute[189493]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  9 10:55:38 compute-0 nova_compute[189493]:      <entry name="serial">32dd7fb0-7003-48cc-b688-4b94946c911f</entry>
Dec  9 10:55:38 compute-0 nova_compute[189493]:      <entry name="uuid">32dd7fb0-7003-48cc-b688-4b94946c911f</entry>
Dec  9 10:55:38 compute-0 nova_compute[189493]:      <entry name="family">Virtual Machine</entry>
Dec  9 10:55:38 compute-0 nova_compute[189493]:    </system>
Dec  9 10:55:38 compute-0 nova_compute[189493]:  </sysinfo>
Dec  9 10:55:38 compute-0 nova_compute[189493]:  <os>
Dec  9 10:55:38 compute-0 nova_compute[189493]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  9 10:55:38 compute-0 nova_compute[189493]:    <boot dev="hd"/>
Dec  9 10:55:38 compute-0 nova_compute[189493]:    <smbios mode="sysinfo"/>
Dec  9 10:55:38 compute-0 nova_compute[189493]:  </os>
Dec  9 10:55:38 compute-0 nova_compute[189493]:  <features>
Dec  9 10:55:38 compute-0 nova_compute[189493]:    <acpi/>
Dec  9 10:55:38 compute-0 nova_compute[189493]:    <apic/>
Dec  9 10:55:38 compute-0 nova_compute[189493]:    <vmcoreinfo/>
Dec  9 10:55:38 compute-0 nova_compute[189493]:  </features>
Dec  9 10:55:38 compute-0 nova_compute[189493]:  <clock offset="utc">
Dec  9 10:55:38 compute-0 nova_compute[189493]:    <timer name="pit" tickpolicy="delay"/>
Dec  9 10:55:38 compute-0 nova_compute[189493]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  9 10:55:38 compute-0 nova_compute[189493]:    <timer name="hpet" present="no"/>
Dec  9 10:55:38 compute-0 nova_compute[189493]:  </clock>
Dec  9 10:55:38 compute-0 nova_compute[189493]:  <cpu mode="host-model" match="exact">
Dec  9 10:55:38 compute-0 nova_compute[189493]:    <topology sockets="1" cores="1" threads="1"/>
Dec  9 10:55:38 compute-0 nova_compute[189493]:  </cpu>
Dec  9 10:55:38 compute-0 nova_compute[189493]:  <devices>
Dec  9 10:55:38 compute-0 nova_compute[189493]:    <disk type="file" device="disk">
Dec  9 10:55:38 compute-0 nova_compute[189493]:      <driver name="qemu" type="qcow2" cache="none"/>
Dec  9 10:55:38 compute-0 nova_compute[189493]:      <source file="/var/lib/nova/instances/32dd7fb0-7003-48cc-b688-4b94946c911f/disk"/>
Dec  9 10:55:38 compute-0 nova_compute[189493]:      <target dev="vda" bus="virtio"/>
Dec  9 10:55:38 compute-0 nova_compute[189493]:    </disk>
Dec  9 10:55:38 compute-0 nova_compute[189493]:    <disk type="file" device="disk">
Dec  9 10:55:38 compute-0 nova_compute[189493]:      <driver name="qemu" type="qcow2" cache="none"/>
Dec  9 10:55:38 compute-0 nova_compute[189493]:      <source file="/var/lib/nova/instances/32dd7fb0-7003-48cc-b688-4b94946c911f/disk.eph0"/>
Dec  9 10:55:38 compute-0 nova_compute[189493]:      <target dev="vdb" bus="virtio"/>
Dec  9 10:55:38 compute-0 nova_compute[189493]:    </disk>
Dec  9 10:55:38 compute-0 nova_compute[189493]:    <disk type="file" device="cdrom">
Dec  9 10:55:38 compute-0 nova_compute[189493]:      <driver name="qemu" type="raw" cache="none"/>
Dec  9 10:55:38 compute-0 nova_compute[189493]:      <source file="/var/lib/nova/instances/32dd7fb0-7003-48cc-b688-4b94946c911f/disk.config"/>
Dec  9 10:55:38 compute-0 nova_compute[189493]:      <target dev="sda" bus="sata"/>
Dec  9 10:55:38 compute-0 nova_compute[189493]:    </disk>
Dec  9 10:55:38 compute-0 nova_compute[189493]:    <interface type="ethernet">
Dec  9 10:55:38 compute-0 nova_compute[189493]:      <mac address="fa:16:3e:83:9f:5d"/>
Dec  9 10:55:38 compute-0 nova_compute[189493]:      <model type="virtio"/>
Dec  9 10:55:38 compute-0 nova_compute[189493]:      <driver name="vhost" rx_queue_size="512"/>
Dec  9 10:55:38 compute-0 nova_compute[189493]:      <mtu size="1442"/>
Dec  9 10:55:38 compute-0 nova_compute[189493]:      <target dev="tapd6164edf-ad"/>
Dec  9 10:55:38 compute-0 nova_compute[189493]:    </interface>
Dec  9 10:55:38 compute-0 nova_compute[189493]:    <serial type="pty">
Dec  9 10:55:38 compute-0 nova_compute[189493]:      <log file="/var/lib/nova/instances/32dd7fb0-7003-48cc-b688-4b94946c911f/console.log" append="off"/>
Dec  9 10:55:38 compute-0 nova_compute[189493]:    </serial>
Dec  9 10:55:38 compute-0 nova_compute[189493]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  9 10:55:38 compute-0 nova_compute[189493]:    <video>
Dec  9 10:55:38 compute-0 nova_compute[189493]:      <model type="virtio"/>
Dec  9 10:55:38 compute-0 nova_compute[189493]:    </video>
Dec  9 10:55:38 compute-0 nova_compute[189493]:    <input type="tablet" bus="usb"/>
Dec  9 10:55:38 compute-0 nova_compute[189493]:    <rng model="virtio">
Dec  9 10:55:38 compute-0 nova_compute[189493]:      <backend model="random">/dev/urandom</backend>
Dec  9 10:55:38 compute-0 nova_compute[189493]:    </rng>
Dec  9 10:55:38 compute-0 nova_compute[189493]:    <controller type="pci" model="pcie-root"/>
Dec  9 10:55:38 compute-0 nova_compute[189493]:    <controller type="pci" model="pcie-root-port"/>
Dec  9 10:55:38 compute-0 nova_compute[189493]:    <controller type="pci" model="pcie-root-port"/>
Dec  9 10:55:38 compute-0 nova_compute[189493]:    <controller type="pci" model="pcie-root-port"/>
Dec  9 10:55:38 compute-0 nova_compute[189493]:    <controller type="pci" model="pcie-root-port"/>
Dec  9 10:55:38 compute-0 nova_compute[189493]:    <controller type="pci" model="pcie-root-port"/>
Dec  9 10:55:38 compute-0 nova_compute[189493]:    <controller type="pci" model="pcie-root-port"/>
Dec  9 10:55:38 compute-0 nova_compute[189493]:    <controller type="pci" model="pcie-root-port"/>
Dec  9 10:55:38 compute-0 nova_compute[189493]:    <controller type="pci" model="pcie-root-port"/>
Dec  9 10:55:38 compute-0 nova_compute[189493]:    <controller type="pci" model="pcie-root-port"/>
Dec  9 10:55:38 compute-0 nova_compute[189493]:    <controller type="pci" model="pcie-root-port"/>
Dec  9 10:55:38 compute-0 nova_compute[189493]:    <controller type="pci" model="pcie-root-port"/>
Dec  9 10:55:38 compute-0 nova_compute[189493]:    <controller type="pci" model="pcie-root-port"/>
Dec  9 10:55:38 compute-0 nova_compute[189493]:    <controller type="pci" model="pcie-root-port"/>
Dec  9 10:55:38 compute-0 nova_compute[189493]:    <controller type="pci" model="pcie-root-port"/>
Dec  9 10:55:38 compute-0 nova_compute[189493]:    <controller type="pci" model="pcie-root-port"/>
Dec  9 10:55:38 compute-0 nova_compute[189493]:    <controller type="pci" model="pcie-root-port"/>
Dec  9 10:55:38 compute-0 nova_compute[189493]:    <controller type="pci" model="pcie-root-port"/>
Dec  9 10:55:38 compute-0 nova_compute[189493]:    <controller type="pci" model="pcie-root-port"/>
Dec  9 10:55:38 compute-0 nova_compute[189493]:    <controller type="pci" model="pcie-root-port"/>
Dec  9 10:55:38 compute-0 nova_compute[189493]:    <controller type="pci" model="pcie-root-port"/>
Dec  9 10:55:38 compute-0 nova_compute[189493]:    <controller type="pci" model="pcie-root-port"/>
Dec  9 10:55:38 compute-0 nova_compute[189493]:    <controller type="pci" model="pcie-root-port"/>
Dec  9 10:55:38 compute-0 nova_compute[189493]:    <controller type="pci" model="pcie-root-port"/>
Dec  9 10:55:38 compute-0 nova_compute[189493]:    <controller type="pci" model="pcie-root-port"/>
Dec  9 10:55:38 compute-0 nova_compute[189493]:    <controller type="usb" index="0"/>
Dec  9 10:55:38 compute-0 nova_compute[189493]:    <memballoon model="virtio">
Dec  9 10:55:38 compute-0 nova_compute[189493]:      <stats period="10"/>
Dec  9 10:55:38 compute-0 nova_compute[189493]:    </memballoon>
Dec  9 10:55:38 compute-0 nova_compute[189493]:  </devices>
Dec  9 10:55:38 compute-0 nova_compute[189493]: </domain>
Dec  9 10:55:38 compute-0 nova_compute[189493]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
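
[editor's note] The XML above is the libvirt domain definition Nova generated in _get_guest_xml: a q35 machine with two qcow2 disks (root and ephemeral), a config-drive CD-ROM on SATA, a virtio VIF with MTU 1442, and a pool of pcie-root-port controllers pre-created so devices can be hot-plugged later. Once the guest is defined, the same XML can be read back from libvirt for comparison; a minimal sketch using the libvirt Python binding, with the connection URI and instance UUID taken from the log:

    # Sketch: fetch the live domain XML to compare with what Nova logged.
    # Assumes the libvirt-python package and access to the local system socket.
    import libvirt

    conn = libvirt.open("qemu:///system")
    dom = conn.lookupByUUIDString("32dd7fb0-7003-48cc-b688-4b94946c911f")
    print(dom.XMLDesc(0))  # full domain XML as libvirt stores it
    conn.close()
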
Dec  9 10:55:38 compute-0 nova_compute[189493]: 2025-12-09 10:55:38.930 189497 DEBUG nova.compute.manager [None req-7a1e6ff3-fa41-4e5f-a75e-0ef70d2ddd09 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 32dd7fb0-7003-48cc-b688-4b94946c911f] Preparing to wait for external event network-vif-plugged-d6164edf-adb9-4fa5-9e6d-bae85d8af633 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec  9 10:55:38 compute-0 nova_compute[189493]: 2025-12-09 10:55:38.931 189497 DEBUG oslo_concurrency.lockutils [None req-7a1e6ff3-fa41-4e5f-a75e-0ef70d2ddd09 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Acquiring lock "32dd7fb0-7003-48cc-b688-4b94946c911f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  9 10:55:38 compute-0 nova_compute[189493]: 2025-12-09 10:55:38.932 189497 DEBUG oslo_concurrency.lockutils [None req-7a1e6ff3-fa41-4e5f-a75e-0ef70d2ddd09 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lock "32dd7fb0-7003-48cc-b688-4b94946c911f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  9 10:55:38 compute-0 nova_compute[189493]: 2025-12-09 10:55:38.933 189497 DEBUG oslo_concurrency.lockutils [None req-7a1e6ff3-fa41-4e5f-a75e-0ef70d2ddd09 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lock "32dd7fb0-7003-48cc-b688-4b94946c911f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
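
[editor's note] Before plugging the VIF, the compute manager registers a waiter for network-vif-plugged-<port-uuid> under the instance's "-events" lock; Neutron delivers the event through the API two seconds later (see the external_instance_event lines at 10:55:40). Conceptually this is a keyed one-shot signal. A simplified illustration of the prepare/wait/deliver handshake — not Nova's actual code, which lives in nova.compute.manager.InstanceEvents:

    # Simplified illustration of the event handshake seen in these log lines.
    import threading

    events = {}               # (instance_uuid, event_name) -> threading.Event
    lock = threading.Lock()

    def prepare(instance, name):
        with lock:            # the 'Acquiring lock "...-events"' step above
            return events.setdefault((instance, name), threading.Event())

    def deliver(instance, name):
        with lock:
            ev = events.pop((instance, name), None)
        if ev:
            ev.set()          # wakes whoever is blocked in wait()

    w = prepare("32dd7fb0", "network-vif-plugged-d6164edf")
    deliver("32dd7fb0", "network-vif-plugged-d6164edf")
    w.wait(timeout=300)       # spawn blocks here until the event arrives
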
Dec  9 10:55:38 compute-0 nova_compute[189493]: 2025-12-09 10:55:38.935 189497 DEBUG nova.virt.libvirt.vif [None req-7a1e6ff3-fa41-4e5f-a75e-0ef70d2ddd09 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-09T10:55:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='vn-afn7y6w-fel25ona52mn-zi55qxbdeak4-vnf-r5yma3vxwd5y',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-afn7y6w-fel25ona52mn-zi55qxbdeak4-vnf-r5yma3vxwd5y',id=3,image_ref='53d12211-5d5c-4333-b3ee-e3dcf1663767',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='24f6e5b2-dd43-46f1-87a4-e2efc1300914'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='736bbfddbeea47e3ac9d863ba120b8f2',ramdisk_id='',reservation_id='r-8nh5c9bf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='admin,reader,member',image_base_image_ref='53d12211-5d5c-4333-b3ee-e3dcf1663767',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-09T10:55:33Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT04MjgxNTk2NzQ2NzMwMzA2NTI0PT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTgyODE1OTY3NDY3MzAzMDY1MjQ9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09ODI4MTU5Njc0NjczMDMwNjUyND09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm
50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTgyODE1OTY3NDY3MzAzMDY1MjQ9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT04MjgxNTk2NzQ2NzMwMzA2NTI0PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT04MjgxNTk2NzQ2NzMwMzA2NTI0PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpY
nV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvKCclc1xuJywgJyAnLmpvaW4oYXJ
Dec  9 10:55:38 compute-0 nova_compute[189493]: wZW4oYXJncywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09ODI4MTU5Njc0NjczMDMwNjUyND09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTgyODE1OTY3NDY3MzAzMDY1MjQ9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT04MjgxNTk2NzQ2NzMwMzA2NTI0PT0tLQo=',user_id='e6d3a937c2a74eb0816d9f63820935e0',uuid=32dd7fb0-7003-48cc-b688-4b94946c911f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d6164edf-adb9-4fa5-9e6d-bae85d8af633", "address": "fa:16:3e:83:9f:5d", "network": {"id": "c5af7354-5afe-400a-9e13-5500648117d8", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.98", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.244", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "736bbfddbeea47e3ac9d863ba120b8f2", "mtu": 1442, "physical_network": null, "tunneled": true}}, 
"type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd6164edf-ad", "ovs_interfaceid": "d6164edf-adb9-4fa5-9e6d-bae85d8af633", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710#033[00m
Dec  9 10:55:38 compute-0 nova_compute[189493]: 2025-12-09 10:55:38.935 189497 DEBUG nova.network.os_vif_util [None req-7a1e6ff3-fa41-4e5f-a75e-0ef70d2ddd09 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Converting VIF {"id": "d6164edf-adb9-4fa5-9e6d-bae85d8af633", "address": "fa:16:3e:83:9f:5d", "network": {"id": "c5af7354-5afe-400a-9e13-5500648117d8", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.98", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.244", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "736bbfddbeea47e3ac9d863ba120b8f2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd6164edf-ad", "ovs_interfaceid": "d6164edf-adb9-4fa5-9e6d-bae85d8af633", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  9 10:55:38 compute-0 nova_compute[189493]: 2025-12-09 10:55:38.937 189497 DEBUG nova.network.os_vif_util [None req-7a1e6ff3-fa41-4e5f-a75e-0ef70d2ddd09 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:83:9f:5d,bridge_name='br-int',has_traffic_filtering=True,id=d6164edf-adb9-4fa5-9e6d-bae85d8af633,network=Network(c5af7354-5afe-400a-9e13-5500648117d8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapd6164edf-ad') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  9 10:55:38 compute-0 nova_compute[189493]: 2025-12-09 10:55:38.938 189497 DEBUG os_vif [None req-7a1e6ff3-fa41-4e5f-a75e-0ef70d2ddd09 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:83:9f:5d,bridge_name='br-int',has_traffic_filtering=True,id=d6164edf-adb9-4fa5-9e6d-bae85d8af633,network=Network(c5af7354-5afe-400a-9e13-5500648117d8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapd6164edf-ad') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  9 10:55:38 compute-0 nova_compute[189493]: 2025-12-09 10:55:38.940 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 10:55:38 compute-0 nova_compute[189493]: 2025-12-09 10:55:38.940 189497 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  9 10:55:38 compute-0 nova_compute[189493]: 2025-12-09 10:55:38.942 189497 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  9 10:55:38 compute-0 nova_compute[189493]: 2025-12-09 10:55:38.965 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 10:55:38 compute-0 nova_compute[189493]: 2025-12-09 10:55:38.966 189497 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd6164edf-ad, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  9 10:55:38 compute-0 nova_compute[189493]: 2025-12-09 10:55:38.968 189497 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd6164edf-ad, col_values=(('external_ids', {'iface-id': 'd6164edf-adb9-4fa5-9e6d-bae85d8af633', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:83:9f:5d', 'vm-uuid': '32dd7fb0-7003-48cc-b688-4b94946c911f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  9 10:55:38 compute-0 nova_compute[189493]: 2025-12-09 10:55:38.971 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
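
[editor's note] os-vif performs the plug as two OVSDB transactions: add the tap port to br-int, then set external_ids on the Interface row (iface-id, attached-mac, vm-uuid) so ovn-controller can match the port to its logical switch port. A rough ovs-vsctl equivalent of the two logged ovsdbapp commands, values copied from the transaction above — a sketch for inspection or reproduction, not what Nova executes:

    # Rough ovs-vsctl equivalent of AddPortCommand + DbSetCommand above.
    import subprocess

    port = "tapd6164edf-ad"
    subprocess.run(
        ["ovs-vsctl", "--may-exist", "add-port", "br-int", port,
         "--", "set", "Interface", port,
         "external_ids:iface-id=d6164edf-adb9-4fa5-9e6d-bae85d8af633",
         "external_ids:iface-status=active",
         "external_ids:attached-mac=fa:16:3e:83:9f:5d",
         "external_ids:vm-uuid=32dd7fb0-7003-48cc-b688-4b94946c911f"],
        check=True)
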
Dec  9 10:55:38 compute-0 NetworkManager[56302]: <info>  [1765277738.9745] manager: (tapd6164edf-ad): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/29)
Dec  9 10:55:38 compute-0 nova_compute[189493]: 2025-12-09 10:55:38.980 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  9 10:55:38 compute-0 nova_compute[189493]: 2025-12-09 10:55:38.993 189497 INFO os_vif [None req-7a1e6ff3-fa41-4e5f-a75e-0ef70d2ddd09 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:83:9f:5d,bridge_name='br-int',has_traffic_filtering=True,id=d6164edf-adb9-4fa5-9e6d-bae85d8af633,network=Network(c5af7354-5afe-400a-9e13-5500648117d8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapd6164edf-ad')#033[00m
Dec  9 10:55:39 compute-0 nova_compute[189493]: 2025-12-09 10:55:39.081 189497 DEBUG nova.virt.libvirt.driver [None req-7a1e6ff3-fa41-4e5f-a75e-0ef70d2ddd09 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  9 10:55:39 compute-0 nova_compute[189493]: 2025-12-09 10:55:39.082 189497 DEBUG nova.virt.libvirt.driver [None req-7a1e6ff3-fa41-4e5f-a75e-0ef70d2ddd09 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  9 10:55:39 compute-0 rsyslogd[236818]: message too long (8192) with configured size 8096, begin of message is: 2025-12-09 10:55:38.892 189497 DEBUG nova.virt.libvirt.vif [None req-7a1e6ff3-fa [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Dec  9 10:55:39 compute-0 nova_compute[189493]: 2025-12-09 10:55:39.083 189497 DEBUG nova.virt.libvirt.driver [None req-7a1e6ff3-fa41-4e5f-a75e-0ef70d2ddd09 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  9 10:55:39 compute-0 nova_compute[189493]: 2025-12-09 10:55:39.083 189497 DEBUG nova.virt.libvirt.driver [None req-7a1e6ff3-fa41-4e5f-a75e-0ef70d2ddd09 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] No VIF found with MAC fa:16:3e:83:9f:5d, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  9 10:55:39 compute-0 nova_compute[189493]: 2025-12-09 10:55:39.085 189497 INFO nova.virt.libvirt.driver [None req-7a1e6ff3-fa41-4e5f-a75e-0ef70d2ddd09 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 32dd7fb0-7003-48cc-b688-4b94946c911f] Using config drive#033[00m
Dec  9 10:55:39 compute-0 rsyslogd[236818]: message too long (8192) with configured size 8096, begin of message is: 2025-12-09 10:55:38.935 189497 DEBUG nova.virt.libvirt.vif [None req-7a1e6ff3-fa [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
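
[editor's note] These two rsyslog e/2445 complaints explain the mangled VIF debug dumps above: the 10:55:38.892 and 10:55:38.935 messages carry the multi-kilobyte base64 user_data and exceed the configured 8096-byte limit, so rsyslog splits and truncates them (hence the prefix-less base64 continuation lines and the missing middle of the payload). If the full payload is needed in the log, the limit can be raised with rsyslog's global $MaxMessageSize directive, which must appear near the top of rsyslog.conf before any input module is loaded; a sketch, assuming legacy-format configuration:

    # /etc/rsyslog.conf -- global directive, set before module loads
    $MaxMessageSize 64k
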
Dec  9 10:55:39 compute-0 nova_compute[189493]: 2025-12-09 10:55:39.988 189497 INFO nova.virt.libvirt.driver [None req-7a1e6ff3-fa41-4e5f-a75e-0ef70d2ddd09 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 32dd7fb0-7003-48cc-b688-4b94946c911f] Creating config drive at /var/lib/nova/instances/32dd7fb0-7003-48cc-b688-4b94946c911f/disk.config#033[00m
Dec  9 10:55:40 compute-0 nova_compute[189493]: 2025-12-09 10:55:40.003 189497 DEBUG oslo_concurrency.processutils [None req-7a1e6ff3-fa41-4e5f-a75e-0ef70d2ddd09 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/32dd7fb0-7003-48cc-b688-4b94946c911f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpud_2su99 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  9 10:55:40 compute-0 nova_compute[189493]: 2025-12-09 10:55:40.139 189497 DEBUG oslo_concurrency.processutils [None req-7a1e6ff3-fa41-4e5f-a75e-0ef70d2ddd09 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/32dd7fb0-7003-48cc-b688-4b94946c911f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpud_2su99" returned: 0 in 0.136s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
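
[editor's note] The config drive is a small ISO9660 image (volume label config-2) built with mkisofs from a staging directory; cloud-init in the guest locates it by that label. Its contents can be checked on the host without booting the guest; a sketch assuming isoinfo from the cdrkit/genisoimage package is available:

    # List the files packed into the config-drive ISO (read-only, safe on a live host).
    import subprocess

    iso = "/var/lib/nova/instances/32dd7fb0-7003-48cc-b688-4b94946c911f/disk.config"
    subprocess.run(["isoinfo", "-R", "-l", "-i", iso], check=True)
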
Dec  9 10:55:40 compute-0 kernel: tapd6164edf-ad: entered promiscuous mode
Dec  9 10:55:40 compute-0 ovn_controller[97780]: 2025-12-09T10:55:40Z|00040|binding|INFO|Claiming lport d6164edf-adb9-4fa5-9e6d-bae85d8af633 for this chassis.
Dec  9 10:55:40 compute-0 ovn_controller[97780]: 2025-12-09T10:55:40Z|00041|binding|INFO|d6164edf-adb9-4fa5-9e6d-bae85d8af633: Claiming fa:16:3e:83:9f:5d 192.168.0.98
Dec  9 10:55:40 compute-0 nova_compute[189493]: 2025-12-09 10:55:40.251 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 10:55:40 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:55:40.258 106644 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:83:9f:5d 192.168.0.98'], port_security=['fa:16:3e:83:9f:5d 192.168.0.98'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'vnf-scaleup_group-5eiooafn7y6w-fel25ona52mn-zi55qxbdeak4-port-7xvtkga34xqd', 'neutron:cidrs': '192.168.0.98/24', 'neutron:device_id': '32dd7fb0-7003-48cc-b688-4b94946c911f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c5af7354-5afe-400a-9e13-5500648117d8', 'neutron:port_capabilities': '', 'neutron:port_name': 'vnf-scaleup_group-5eiooafn7y6w-fel25ona52mn-zi55qxbdeak4-port-7xvtkga34xqd', 'neutron:project_id': '736bbfddbeea47e3ac9d863ba120b8f2', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd86dfae4-cfd5-480d-a50e-0084326b1439', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.244'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=61df917c-633f-4b35-857d-39fd859caf35, chassis=[<ovs.db.idl.Row object at 0x7fa01184a610>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fa01184a610>], logical_port=d6164edf-adb9-4fa5-9e6d-bae85d8af633) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  9 10:55:40 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:55:40.259 106644 INFO neutron.agent.ovn.metadata.agent [-] Port d6164edf-adb9-4fa5-9e6d-bae85d8af633 in datapath c5af7354-5afe-400a-9e13-5500648117d8 bound to our chassis#033[00m
Dec  9 10:55:40 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:55:40.261 106644 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c5af7354-5afe-400a-9e13-5500648117d8#033[00m
Dec  9 10:55:40 compute-0 NetworkManager[56302]: <info>  [1765277740.2689] manager: (tapd6164edf-ad): new Tun device (/org/freedesktop/NetworkManager/Devices/30)
Dec  9 10:55:40 compute-0 ovn_controller[97780]: 2025-12-09T10:55:40Z|00042|binding|INFO|Setting lport d6164edf-adb9-4fa5-9e6d-bae85d8af633 ovn-installed in OVS
Dec  9 10:55:40 compute-0 ovn_controller[97780]: 2025-12-09T10:55:40Z|00043|binding|INFO|Setting lport d6164edf-adb9-4fa5-9e6d-bae85d8af633 up in Southbound
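
[editor's note] ovn-controller on this chassis claims the logical port (binding it to compute-0), marks it ovn-installed in the local OVS database, and finally sets it up in the OVN Southbound DB; that last step is what lets Neutron emit the network-vif-plugged event Nova is waiting on. The binding can be verified against the Southbound DB; a sketch (the ovn-sbctl connection details vary per deployment):

    # Check which chassis claimed the logical port and whether it is up.
    import subprocess

    subprocess.run(
        ["ovn-sbctl", "find", "Port_Binding",
         "logical_port=d6164edf-adb9-4fa5-9e6d-bae85d8af633"],
        check=True)
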
Dec  9 10:55:40 compute-0 nova_compute[189493]: 2025-12-09 10:55:40.272 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 10:55:40 compute-0 nova_compute[189493]: 2025-12-09 10:55:40.275 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 10:55:40 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:55:40.284 239934 DEBUG oslo.privsep.daemon [-] privsep: reply[ebd10290-0d9b-40ce-b443-2684353b0bb5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  9 10:55:40 compute-0 systemd-machined[155790]: New machine qemu-3-instance-00000003.
Dec  9 10:55:40 compute-0 systemd-udevd[242866]: Network interface NamePolicy= disabled on kernel command line.
Dec  9 10:55:40 compute-0 systemd[1]: Started Virtual Machine qemu-3-instance-00000003.
Dec  9 10:55:40 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:55:40.328 239949 DEBUG oslo.privsep.daemon [-] privsep: reply[6c5a629f-a943-42c9-a2e7-92ec02402697]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  9 10:55:40 compute-0 NetworkManager[56302]: <info>  [1765277740.3310] device (tapd6164edf-ad): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  9 10:55:40 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:55:40.331 239949 DEBUG oslo.privsep.daemon [-] privsep: reply[fa768375-0040-49c4-bf84-17be1f1bfeb2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  9 10:55:40 compute-0 NetworkManager[56302]: <info>  [1765277740.3432] device (tapd6164edf-ad): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  9 10:55:40 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:55:40.365 239949 DEBUG oslo.privsep.daemon [-] privsep: reply[86a9390e-830f-4760-b239-f92dcd518627]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  9 10:55:40 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:55:40.388 239934 DEBUG oslo.privsep.daemon [-] privsep: reply[8ad11e07-4e31-435a-82c8-7a4ddbd68c09]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc5af7354-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:bf:0d:a0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 9, 'tx_packets': 7, 'rx_bytes': 658, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 9, 'tx_packets': 7, 'rx_bytes': 658, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 12], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 396027, 'reachable_time': 33701, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 
'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 242875, 'error': None, 'target': 'ovnmeta-c5af7354-5afe-400a-9e13-5500648117d8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  9 10:55:40 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:55:40.410 239934 DEBUG oslo.privsep.daemon [-] privsep: reply[0040de22-326d-4d0c-97fa-6f122e283db4]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapc5af7354-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 396043, 'tstamp': 396043}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 242878, 'error': None, 'target': 'ovnmeta-c5af7354-5afe-400a-9e13-5500648117d8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '192.168.0.2'], ['IFA_LOCAL', '192.168.0.2'], ['IFA_BROADCAST', '192.168.0.255'], ['IFA_LABEL', 'tapc5af7354-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 396046, 'tstamp': 396046}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 242878, 'error': None, 'target': 'ovnmeta-c5af7354-5afe-400a-9e13-5500648117d8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
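
[editor's note] The privsep netlink replies above show the result of "Provisioning metadata for network c5af7354-...": inside the ovnmeta-c5af7354-... network namespace (the 'target' field of each reply), interface tapc5af7354-51 now carries 169.254.169.254/32 — the metadata endpoint — plus 192.168.0.2/24 on the tenant subnet. This can be confirmed directly with iproute2, namespace name taken from the log:

    # Inspect the OVN metadata namespace created for this network.
    import subprocess

    ns = "ovnmeta-c5af7354-5afe-400a-9e13-5500648117d8"
    subprocess.run(["ip", "netns", "exec", ns, "ip", "-4", "addr", "show"],
                   check=True)
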
Dec  9 10:55:40 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:55:40.413 106644 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc5af7354-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  9 10:55:40 compute-0 nova_compute[189493]: 2025-12-09 10:55:40.416 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 10:55:40 compute-0 nova_compute[189493]: 2025-12-09 10:55:40.419 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 10:55:40 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:55:40.420 106644 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc5af7354-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  9 10:55:40 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:55:40.421 106644 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  9 10:55:40 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:55:40.421 106644 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc5af7354-50, col_values=(('external_ids', {'iface-id': '3eb47070-bc26-4827-a5a8-68152f05129c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  9 10:55:40 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:55:40.422 106644 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  9 10:55:40 compute-0 systemd[1]: Starting libvirt proxy daemon...
Dec  9 10:55:40 compute-0 systemd[1]: Started libvirt proxy daemon.
Dec  9 10:55:40 compute-0 nova_compute[189493]: 2025-12-09 10:55:40.942 189497 DEBUG nova.compute.manager [req-0c3631f5-0cce-49c4-952f-e021d46461be req-32dbe9cf-8271-4294-be16-9094239f64c6 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] [instance: 32dd7fb0-7003-48cc-b688-4b94946c911f] Received event network-vif-plugged-d6164edf-adb9-4fa5-9e6d-bae85d8af633 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  9 10:55:40 compute-0 nova_compute[189493]: 2025-12-09 10:55:40.943 189497 DEBUG oslo_concurrency.lockutils [req-0c3631f5-0cce-49c4-952f-e021d46461be req-32dbe9cf-8271-4294-be16-9094239f64c6 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] Acquiring lock "32dd7fb0-7003-48cc-b688-4b94946c911f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  9 10:55:40 compute-0 nova_compute[189493]: 2025-12-09 10:55:40.943 189497 DEBUG oslo_concurrency.lockutils [req-0c3631f5-0cce-49c4-952f-e021d46461be req-32dbe9cf-8271-4294-be16-9094239f64c6 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] Lock "32dd7fb0-7003-48cc-b688-4b94946c911f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  9 10:55:40 compute-0 nova_compute[189493]: 2025-12-09 10:55:40.943 189497 DEBUG oslo_concurrency.lockutils [req-0c3631f5-0cce-49c4-952f-e021d46461be req-32dbe9cf-8271-4294-be16-9094239f64c6 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] Lock "32dd7fb0-7003-48cc-b688-4b94946c911f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  9 10:55:40 compute-0 nova_compute[189493]: 2025-12-09 10:55:40.943 189497 DEBUG nova.compute.manager [req-0c3631f5-0cce-49c4-952f-e021d46461be req-32dbe9cf-8271-4294-be16-9094239f64c6 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] [instance: 32dd7fb0-7003-48cc-b688-4b94946c911f] Processing event network-vif-plugged-d6164edf-adb9-4fa5-9e6d-bae85d8af633 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec  9 10:55:41 compute-0 nova_compute[189493]: 2025-12-09 10:55:41.019 189497 DEBUG nova.network.neutron [req-0ab7bb34-8f2c-41f0-8bf9-ada69ced9192 req-7ed35e7a-1154-4696-b477-9aa4bd2ff162 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] [instance: 32dd7fb0-7003-48cc-b688-4b94946c911f] Updated VIF entry in instance network info cache for port d6164edf-adb9-4fa5-9e6d-bae85d8af633. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  9 10:55:41 compute-0 nova_compute[189493]: 2025-12-09 10:55:41.020 189497 DEBUG nova.network.neutron [req-0ab7bb34-8f2c-41f0-8bf9-ada69ced9192 req-7ed35e7a-1154-4696-b477-9aa4bd2ff162 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] [instance: 32dd7fb0-7003-48cc-b688-4b94946c911f] Updating instance_info_cache with network_info: [{"id": "d6164edf-adb9-4fa5-9e6d-bae85d8af633", "address": "fa:16:3e:83:9f:5d", "network": {"id": "c5af7354-5afe-400a-9e13-5500648117d8", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.98", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.244", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "736bbfddbeea47e3ac9d863ba120b8f2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd6164edf-ad", "ovs_interfaceid": "d6164edf-adb9-4fa5-9e6d-bae85d8af633", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  9 10:55:41 compute-0 nova_compute[189493]: 2025-12-09 10:55:41.044 189497 DEBUG oslo_concurrency.lockutils [req-0ab7bb34-8f2c-41f0-8bf9-ada69ced9192 req-7ed35e7a-1154-4696-b477-9aa4bd2ff162 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] Releasing lock "refresh_cache-32dd7fb0-7003-48cc-b688-4b94946c911f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  9 10:55:41 compute-0 nova_compute[189493]: 2025-12-09 10:55:41.100 189497 DEBUG nova.compute.manager [None req-7a1e6ff3-fa41-4e5f-a75e-0ef70d2ddd09 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 32dd7fb0-7003-48cc-b688-4b94946c911f] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec  9 10:55:41 compute-0 nova_compute[189493]: 2025-12-09 10:55:41.101 189497 DEBUG nova.virt.driver [None req-bd919016-4d35-4252-9704-133b2c72d336 - - - - - -] Emitting event <LifecycleEvent: 1765277741.1012912, 32dd7fb0-7003-48cc-b688-4b94946c911f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  9 10:55:41 compute-0 nova_compute[189493]: 2025-12-09 10:55:41.102 189497 INFO nova.compute.manager [None req-bd919016-4d35-4252-9704-133b2c72d336 - - - - - -] [instance: 32dd7fb0-7003-48cc-b688-4b94946c911f] VM Started (Lifecycle Event)#033[00m
Dec  9 10:55:41 compute-0 nova_compute[189493]: 2025-12-09 10:55:41.109 189497 DEBUG nova.virt.libvirt.driver [None req-7a1e6ff3-fa41-4e5f-a75e-0ef70d2ddd09 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 32dd7fb0-7003-48cc-b688-4b94946c911f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec  9 10:55:41 compute-0 nova_compute[189493]: 2025-12-09 10:55:41.115 189497 INFO nova.virt.libvirt.driver [-] [instance: 32dd7fb0-7003-48cc-b688-4b94946c911f] Instance spawned successfully.#033[00m
Dec  9 10:55:41 compute-0 nova_compute[189493]: 2025-12-09 10:55:41.115 189497 DEBUG nova.virt.libvirt.driver [None req-7a1e6ff3-fa41-4e5f-a75e-0ef70d2ddd09 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 32dd7fb0-7003-48cc-b688-4b94946c911f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec  9 10:55:41 compute-0 nova_compute[189493]: 2025-12-09 10:55:41.129 189497 DEBUG nova.compute.manager [None req-bd919016-4d35-4252-9704-133b2c72d336 - - - - - -] [instance: 32dd7fb0-7003-48cc-b688-4b94946c911f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  9 10:55:41 compute-0 nova_compute[189493]: 2025-12-09 10:55:41.140 189497 DEBUG nova.compute.manager [None req-bd919016-4d35-4252-9704-133b2c72d336 - - - - - -] [instance: 32dd7fb0-7003-48cc-b688-4b94946c911f] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
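
[editor's note] The lifecycle handler compares the DB power_state (0) with what the hypervisor now reports (1). For reference, these are Nova's numeric power-state codes as defined in nova.compute.power_state — worth verifying against the installed tree for this release:

    # Nova power-state codes referenced in the sync messages above
    # (from nova.compute.power_state; confirm against your release).
    POWER_STATES = {
        0: "NOSTATE",    # DB value before the first sync
        1: "RUNNING",    # what libvirt reports once the guest starts
        3: "PAUSED",
        4: "SHUTDOWN",
        6: "CRASHED",
        7: "SUSPENDED",
    }
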
Dec  9 10:55:41 compute-0 nova_compute[189493]: 2025-12-09 10:55:41.147 189497 DEBUG nova.virt.libvirt.driver [None req-7a1e6ff3-fa41-4e5f-a75e-0ef70d2ddd09 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 32dd7fb0-7003-48cc-b688-4b94946c911f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  9 10:55:41 compute-0 nova_compute[189493]: 2025-12-09 10:55:41.147 189497 DEBUG nova.virt.libvirt.driver [None req-7a1e6ff3-fa41-4e5f-a75e-0ef70d2ddd09 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 32dd7fb0-7003-48cc-b688-4b94946c911f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  9 10:55:41 compute-0 nova_compute[189493]: 2025-12-09 10:55:41.148 189497 DEBUG nova.virt.libvirt.driver [None req-7a1e6ff3-fa41-4e5f-a75e-0ef70d2ddd09 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 32dd7fb0-7003-48cc-b688-4b94946c911f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  9 10:55:41 compute-0 nova_compute[189493]: 2025-12-09 10:55:41.148 189497 DEBUG nova.virt.libvirt.driver [None req-7a1e6ff3-fa41-4e5f-a75e-0ef70d2ddd09 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 32dd7fb0-7003-48cc-b688-4b94946c911f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  9 10:55:41 compute-0 nova_compute[189493]: 2025-12-09 10:55:41.149 189497 DEBUG nova.virt.libvirt.driver [None req-7a1e6ff3-fa41-4e5f-a75e-0ef70d2ddd09 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 32dd7fb0-7003-48cc-b688-4b94946c911f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  9 10:55:41 compute-0 nova_compute[189493]: 2025-12-09 10:55:41.149 189497 DEBUG nova.virt.libvirt.driver [None req-7a1e6ff3-fa41-4e5f-a75e-0ef70d2ddd09 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 32dd7fb0-7003-48cc-b688-4b94946c911f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  9 10:55:41 compute-0 nova_compute[189493]: 2025-12-09 10:55:41.161 189497 INFO nova.compute.manager [None req-bd919016-4d35-4252-9704-133b2c72d336 - - - - - -] [instance: 32dd7fb0-7003-48cc-b688-4b94946c911f] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  9 10:55:41 compute-0 nova_compute[189493]: 2025-12-09 10:55:41.161 189497 DEBUG nova.virt.driver [None req-bd919016-4d35-4252-9704-133b2c72d336 - - - - - -] Emitting event <LifecycleEvent: 1765277741.1014433, 32dd7fb0-7003-48cc-b688-4b94946c911f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  9 10:55:41 compute-0 nova_compute[189493]: 2025-12-09 10:55:41.161 189497 INFO nova.compute.manager [None req-bd919016-4d35-4252-9704-133b2c72d336 - - - - - -] [instance: 32dd7fb0-7003-48cc-b688-4b94946c911f] VM Paused (Lifecycle Event)#033[00m
Dec  9 10:55:41 compute-0 nova_compute[189493]: 2025-12-09 10:55:41.186 189497 DEBUG nova.compute.manager [None req-bd919016-4d35-4252-9704-133b2c72d336 - - - - - -] [instance: 32dd7fb0-7003-48cc-b688-4b94946c911f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  9 10:55:41 compute-0 nova_compute[189493]: 2025-12-09 10:55:41.205 189497 DEBUG nova.virt.driver [None req-bd919016-4d35-4252-9704-133b2c72d336 - - - - - -] Emitting event <LifecycleEvent: 1765277741.106694, 32dd7fb0-7003-48cc-b688-4b94946c911f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  9 10:55:41 compute-0 nova_compute[189493]: 2025-12-09 10:55:41.205 189497 INFO nova.compute.manager [None req-bd919016-4d35-4252-9704-133b2c72d336 - - - - - -] [instance: 32dd7fb0-7003-48cc-b688-4b94946c911f] VM Resumed (Lifecycle Event)#033[00m
Dec  9 10:55:41 compute-0 nova_compute[189493]: 2025-12-09 10:55:41.689 189497 DEBUG nova.compute.manager [None req-bd919016-4d35-4252-9704-133b2c72d336 - - - - - -] [instance: 32dd7fb0-7003-48cc-b688-4b94946c911f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  9 10:55:41 compute-0 nova_compute[189493]: 2025-12-09 10:55:41.694 189497 INFO nova.compute.manager [None req-7a1e6ff3-fa41-4e5f-a75e-0ef70d2ddd09 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 32dd7fb0-7003-48cc-b688-4b94946c911f] Took 7.89 seconds to spawn the instance on the hypervisor.#033[00m
Dec  9 10:55:41 compute-0 nova_compute[189493]: 2025-12-09 10:55:41.695 189497 DEBUG nova.compute.manager [None req-7a1e6ff3-fa41-4e5f-a75e-0ef70d2ddd09 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 32dd7fb0-7003-48cc-b688-4b94946c911f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  9 10:55:41 compute-0 nova_compute[189493]: 2025-12-09 10:55:41.701 189497 DEBUG nova.compute.manager [None req-bd919016-4d35-4252-9704-133b2c72d336 - - - - - -] [instance: 32dd7fb0-7003-48cc-b688-4b94946c911f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  9 10:55:41 compute-0 nova_compute[189493]: 2025-12-09 10:55:41.736 189497 INFO nova.compute.manager [None req-bd919016-4d35-4252-9704-133b2c72d336 - - - - - -] [instance: 32dd7fb0-7003-48cc-b688-4b94946c911f] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
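For reference, the numbers in the "Synchronizing instance power state" line just above ("current DB power_state: 0, VM power_state: 1") are nova's power-state codes; the sync is skipped because the task_state is still spawning. A sketch of the mapping, written from memory of nova.compute.power_state rather than taken from this log:

    # nova.compute.power_state constants (from memory)
    NOSTATE = 0    # what the DB holds before the first successful sync
    RUNNING = 1    # what libvirt reports once the domain is up
    PAUSED = 3
    SHUTDOWN = 4
    CRASHED = 6
    SUSPENDED = 7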
Dec  9 10:55:41 compute-0 nova_compute[189493]: 2025-12-09 10:55:41.777 189497 INFO nova.compute.manager [None req-7a1e6ff3-fa41-4e5f-a75e-0ef70d2ddd09 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 32dd7fb0-7003-48cc-b688-4b94946c911f] Took 8.51 seconds to build instance.#033[00m
Dec  9 10:55:41 compute-0 nova_compute[189493]: 2025-12-09 10:55:41.800 189497 DEBUG oslo_concurrency.lockutils [None req-7a1e6ff3-fa41-4e5f-a75e-0ef70d2ddd09 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lock "32dd7fb0-7003-48cc-b688-4b94946c911f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.631s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
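The lock line above closes the build, and the timings nest as expected: the lock was held 8.631 s, which brackets the 8.51 s "build instance", which brackets the 7.89 s hypervisor spawn. The Acquiring/acquired/released pattern (it recurs below for the "-events" lock) is oslo.concurrency's lockutils; a minimal sketch, assuming oslo.concurrency is installed:

    from oslo_concurrency import lockutils

    # Serialize all build/delete work for one instance on this host; the
    # lock name is the instance UUID, as in the log line above.
    @lockutils.synchronized("32dd7fb0-7003-48cc-b688-4b94946c911f")
    def _locked_do_build_and_run_instance():
        ...  # while this runs, other callers of the same lock name block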
Dec  9 10:55:42 compute-0 nova_compute[189493]: 2025-12-09 10:55:42.248 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 10:55:42 compute-0 podman[242906]: 2025-12-09 10:55:42.964945559 +0000 UTC m=+0.103106028 container health_status 8ad198c17f1da12dc50d5e17562d0139fb2a2f84db056ee9551dbf4f34c4cb9d (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release-0.7.12=, version=9.4, build-date=2024-09-18T21:23:30, name=ubi9, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., config_id=edpm, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, distribution-scope=public, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.openshift.tags=base rhel9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-container, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=kepler, architecture=x86_64)
Dec  9 10:55:42 compute-0 podman[242907]: 2025-12-09 10:55:42.992180981 +0000 UTC m=+0.123137918 container health_status ceb1c84a2b093143b9383b7e11364d7e851348d724743a0cd9ce4fd0c7070c92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.schema-version=1.0)
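The podman health_status lines here and throughout are emitted by podman's healthcheck timers running the 'test' command from each container's config_data. The same check can be fired by hand; a sketch assuming podman is on PATH, using the container_name "kepler" from the line above:

    import subprocess

    # Equivalent of the timer-driven check; exit status 0 corresponds to
    # health_status=healthy in these log lines.
    result = subprocess.run(["podman", "healthcheck", "run", "kepler"],
                            capture_output=True, text=True)
    print(result.returncode, result.stdout, result.stderr)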
Dec  9 10:55:43 compute-0 nova_compute[189493]: 2025-12-09 10:55:43.973 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 10:55:45 compute-0 nova_compute[189493]: 2025-12-09 10:55:45.127 189497 DEBUG nova.compute.manager [req-01bada05-27ef-4a0c-85d3-3f88381ed5c5 req-b2d3368f-9dd2-4c90-945e-5385778c9214 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] [instance: 32dd7fb0-7003-48cc-b688-4b94946c911f] Received event network-vif-plugged-d6164edf-adb9-4fa5-9e6d-bae85d8af633 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  9 10:55:45 compute-0 nova_compute[189493]: 2025-12-09 10:55:45.130 189497 DEBUG oslo_concurrency.lockutils [req-01bada05-27ef-4a0c-85d3-3f88381ed5c5 req-b2d3368f-9dd2-4c90-945e-5385778c9214 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] Acquiring lock "32dd7fb0-7003-48cc-b688-4b94946c911f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  9 10:55:45 compute-0 nova_compute[189493]: 2025-12-09 10:55:45.131 189497 DEBUG oslo_concurrency.lockutils [req-01bada05-27ef-4a0c-85d3-3f88381ed5c5 req-b2d3368f-9dd2-4c90-945e-5385778c9214 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] Lock "32dd7fb0-7003-48cc-b688-4b94946c911f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  9 10:55:45 compute-0 nova_compute[189493]: 2025-12-09 10:55:45.132 189497 DEBUG oslo_concurrency.lockutils [req-01bada05-27ef-4a0c-85d3-3f88381ed5c5 req-b2d3368f-9dd2-4c90-945e-5385778c9214 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] Lock "32dd7fb0-7003-48cc-b688-4b94946c911f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  9 10:55:45 compute-0 nova_compute[189493]: 2025-12-09 10:55:45.134 189497 DEBUG nova.compute.manager [req-01bada05-27ef-4a0c-85d3-3f88381ed5c5 req-b2d3368f-9dd2-4c90-945e-5385778c9214 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] [instance: 32dd7fb0-7003-48cc-b688-4b94946c911f] No waiting events found dispatching network-vif-plugged-d6164edf-adb9-4fa5-9e6d-bae85d8af633 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  9 10:55:45 compute-0 nova_compute[189493]: 2025-12-09 10:55:45.135 189497 WARNING nova.compute.manager [req-01bada05-27ef-4a0c-85d3-3f88381ed5c5 req-b2d3368f-9dd2-4c90-945e-5385778c9214 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] [instance: 32dd7fb0-7003-48cc-b688-4b94946c911f] Received unexpected event network-vif-plugged-d6164edf-adb9-4fa5-9e6d-bae85d8af633 for instance with vm_state active and task_state None.#033[00m
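The warning above is benign here: neutron delivered network-vif-plugged after the spawn had already finished, so no waiter was registered for it. The event travels through nova's os-server-external-events API; a sketch of the payload, with the field layout from the public API reference and the values from this log:

    # What neutron POSTs to /v2.1/os-server-external-events (sketch)
    payload = {
        "events": [{
            "name": "network-vif-plugged",
            "server_uuid": "32dd7fb0-7003-48cc-b688-4b94946c911f",
            "tag": "d6164edf-adb9-4fa5-9e6d-bae85d8af633",  # the port UUID
            "status": "completed",
        }]
    }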
Dec  9 10:55:45 compute-0 podman[242941]: 2025-12-09 10:55:45.932333362 +0000 UTC m=+0.090196116 container health_status 8f562587c42532f877bd4ac5090cf2d81dd9415b6201e22f74972e6d6b9e9403 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Dec  9 10:55:45 compute-0 podman[242942]: 2025-12-09 10:55:45.976160189 +0000 UTC m=+0.119265079 container health_status b432835229990b9e7cd237d75f8273b15e565fca524d4ea9a7c1f1bf3c773614 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Dec  9 10:55:47 compute-0 nova_compute[189493]: 2025-12-09 10:55:47.250 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 10:55:48 compute-0 nova_compute[189493]: 2025-12-09 10:55:48.978 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 10:55:51 compute-0 podman[242975]: 2025-12-09 10:55:51.96110504 +0000 UTC m=+0.113594081 container health_status 5da5cd4e36e0bba48fb617392bc8983ed1dbced7e4599ef74bb3327a2d50468d (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, version=9.6, name=ubi9-minimal, config_id=edpm, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, build-date=2025-08-20T13:12:41, io.openshift.expose-services=, vendor=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.)
Dec  9 10:55:52 compute-0 nova_compute[189493]: 2025-12-09 10:55:52.254 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 10:55:53 compute-0 nova_compute[189493]: 2025-12-09 10:55:53.981 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 10:55:54 compute-0 podman[242997]: 2025-12-09 10:55:54.012620643 +0000 UTC m=+0.163525008 container health_status e0a077177b2f078df1f170a6e5c0e8e08d4365b999ec0c487047ed6ab628f3d6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  9 10:55:56 compute-0 podman[243023]: 2025-12-09 10:55:56.927850557 +0000 UTC m=+0.078496139 container health_status d3a438131bb4ae6fd62d2e1493edbbbd51d1b8d6cbe1e9243f414a3aa421452b (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  9 10:55:57 compute-0 nova_compute[189493]: 2025-12-09 10:55:57.256 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 10:55:58 compute-0 nova_compute[189493]: 2025-12-09 10:55:58.986 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 10:55:59 compute-0 podman[203687]: time="2025-12-09T10:55:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  9 10:55:59 compute-0 podman[203687]: @ - - [09/Dec/2025:10:55:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Dec  9 10:55:59 compute-0 podman[203687]: @ - - [09/Dec/2025:10:55:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4771 "" "Go-http-client/1.1"
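The two access-log lines above are the libpod REST API answering podman_exporter over the podman socket. A sketch of the same listing call through the podman-py bindings, assuming podman-py is installed; the socket path matches CONTAINER_HOST in the podman_exporter config shown later in this log:

    from podman import PodmanClient

    with PodmanClient(base_url="unix:///run/podman/podman.sock") as client:
        # Mirrors GET /libpod/containers/json?all=true from the access log
        for c in client.containers.list(all=True):
            print(c.name, c.status)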
Dec  9 10:56:01 compute-0 openstack_network_exporter[205823]: ERROR   10:56:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  9 10:56:01 compute-0 openstack_network_exporter[205823]: ERROR   10:56:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  9 10:56:01 compute-0 openstack_network_exporter[205823]: ERROR   10:56:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  9 10:56:01 compute-0 openstack_network_exporter[205823]: ERROR   10:56:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  9 10:56:01 compute-0 openstack_network_exporter[205823]: ERROR   10:56:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
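The exporter errors above mean it could not find the ovs/ovn control sockets it drives through ovs-appctl. On a compute node the ovn-northd lookup is expected to fail (northd typically runs on the control plane), and the dpif-netdev calls fail because this host has no userspace (PMD) datapath. A sketch of the socket discovery being attempted, using the conventional default paths (assumptions, not from the log):

    import glob
    import subprocess

    # ovs-vswitchd creates /run/openvswitch/ovs-vswitchd.<pid>.ctl at startup
    sockets = glob.glob("/run/openvswitch/ovs-vswitchd.*.ctl")
    if sockets:
        subprocess.run(["ovs-appctl", "--target", sockets[0], "version"])
    else:
        print("no ovs-vswitchd control socket; check the /run/openvswitch mount")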
Dec  9 10:56:02 compute-0 nova_compute[189493]: 2025-12-09 10:56:02.258 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 10:56:03 compute-0 nova_compute[189493]: 2025-12-09 10:56:03.993 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 10:56:05 compute-0 podman[243049]: 2025-12-09 10:56:05.971328443 +0000 UTC m=+0.117514411 container health_status 0391d8911d61abd7376f1f93f329cadfe8d3add845c9e6f46fc2c3dfbcc4f02a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=multipathd, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec  9 10:56:07 compute-0 nova_compute[189493]: 2025-12-09 10:56:07.260 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 10:56:08 compute-0 podman[243067]: 2025-12-09 10:56:08.927104971 +0000 UTC m=+0.077827110 container health_status 8508a94dacd5acdb5dbf860f4282331529be5c86ebd3e90b10e1dde8bc5013e9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec  9 10:56:09 compute-0 nova_compute[189493]: 2025-12-09 10:56:08.999 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 10:56:10 compute-0 ovn_controller[97780]: 2025-12-09T10:56:10Z|00044|memory_trim|INFO|Detected inactivity (last active 30003 ms ago): trimming memory
Dec  9 10:56:12 compute-0 ovn_controller[97780]: 2025-12-09T10:56:12Z|00008|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:83:9f:5d 192.168.0.98
Dec  9 10:56:12 compute-0 ovn_controller[97780]: 2025-12-09T10:56:12Z|00009|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:83:9f:5d 192.168.0.98
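The DHCPOFFER/DHCPACK pair above is OVN's native DHCP: ovn-controller answers the guest (fa:16:3e:83:9f:5d receives 192.168.0.98) from DHCP_Options rows in the northbound DB, with no dnsmasq on the node. A sketch of how such options get wired up with ovn-nbctl; apart from the 192.168.0.0/24 range implied by the lease, the addresses and the port name are illustrative:

    import subprocess

    def nbctl(*args):
        return subprocess.run(["ovn-nbctl", *args], capture_output=True,
                              text=True, check=True).stdout.strip()

    opts = nbctl("dhcp-options-create", "192.168.0.0/24")
    nbctl("dhcp-options-set-options", opts,
          "server_id=192.168.0.1", "server_mac=fa:16:3e:00:00:01",
          "lease_time=43200", "router=192.168.0.1")
    nbctl("lsp-set-dhcpv4-options", "<logical-switch-port>", opts)  # placeholder port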
Dec  9 10:56:12 compute-0 nova_compute[189493]: 2025-12-09 10:56:12.263 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 10:56:13 compute-0 podman[243106]: 2025-12-09 10:56:13.939589407 +0000 UTC m=+0.076641206 container health_status ceb1c84a2b093143b9383b7e11364d7e851348d724743a0cd9ce4fd0c7070c92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Dec  9 10:56:13 compute-0 podman[243105]: 2025-12-09 10:56:13.939854035 +0000 UTC m=+0.076661267 container health_status 8ad198c17f1da12dc50d5e17562d0139fb2a2f84db056ee9551dbf4f34c4cb9d (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, com.redhat.component=ubi9-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, build-date=2024-09-18T21:23:30, container_name=kepler, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, maintainer=Red Hat, Inc., name=ubi9, release=1214.1726694543, vendor=Red Hat, Inc., io.openshift.tags=base rhel9, architecture=x86_64, distribution-scope=public, io.openshift.expose-services=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, version=9.4, summary=Provides the latest release of Red Hat Universal Base Image 9., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release-0.7.12=, config_id=edpm)
Dec  9 10:56:14 compute-0 nova_compute[189493]: 2025-12-09 10:56:14.005 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 10:56:16 compute-0 podman[243144]: 2025-12-09 10:56:16.961481817 +0000 UTC m=+0.114810395 container health_status b432835229990b9e7cd237d75f8273b15e565fca524d4ea9a7c1f1bf3c773614 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  9 10:56:16 compute-0 podman[243143]: 2025-12-09 10:56:16.982855025 +0000 UTC m=+0.126101702 container health_status 8f562587c42532f877bd4ac5090cf2d81dd9415b6201e22f74972e6d6b9e9403 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec  9 10:56:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:56:16.988 106644 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  9 10:56:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:56:16.989 106644 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  9 10:56:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:56:16.989 106644 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  9 10:56:17 compute-0 nova_compute[189493]: 2025-12-09 10:56:17.266 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 10:56:19 compute-0 nova_compute[189493]: 2025-12-09 10:56:19.008 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 10:56:22 compute-0 nova_compute[189493]: 2025-12-09 10:56:22.269 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 10:56:22 compute-0 podman[243182]: 2025-12-09 10:56:22.958094024 +0000 UTC m=+0.100034381 container health_status 5da5cd4e36e0bba48fb617392bc8983ed1dbced7e4599ef74bb3327a2d50468d (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-minimal-container, vendor=Red Hat, Inc., distribution-scope=public, io.buildah.version=1.33.7, release=1755695350, vcs-type=git, build-date=2025-08-20T13:12:41, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., managed_by=edpm_ansible, name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, version=9.6, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Dec  9 10:56:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:23.291 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  9 10:56:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:23.292 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
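The two manager lines above say the [pollsters] source defines more pollsters than the single worker thread processing them, so a polling cycle can take longer than its interval. The source is defined in ceilometer's polling.yaml; a minimal sketch of one, where the source name matches the log but the interval and meter list are illustrative:

    # polling.yaml (sketch)
    sources:
        - name: pollsters
          interval: 120
          meters:
              - cpu
              - network.incoming.bytes
              - network.outgoing.bytes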
Dec  9 10:56:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:23.292 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b800>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 10:56:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:23.293 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f8a75e1b7d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 10:56:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:23.295 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e19820>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 10:56:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:23.295 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75eb8080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 10:56:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:23.295 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75eb8110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 10:56:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:23.296 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b1a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 10:56:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:23.296 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75eb81a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 10:56:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:23.296 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b2c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 10:56:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:23.296 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 10:56:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:23.297 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 10:56:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:23.297 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a78fa8380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 10:56:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:23.297 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a7702ebd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 10:56:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:23.298 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b3e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 10:56:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:23.298 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 10:56:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:23.298 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75eb8440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 10:56:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:23.299 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a78c21460>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 10:56:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:23.299 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b4a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 10:56:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:23.300 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1bce0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 10:56:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:23.300 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 10:56:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:23.300 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1bd10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 10:56:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:23.300 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 10:56:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:23.301 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1bd70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 10:56:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:23.301 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1bdd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 10:56:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:23.301 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1be30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 10:56:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:23.301 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1bf20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 10:56:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:23.302 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b7a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 10:56:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:23.302 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1bfb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a7984dbb0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 10:56:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:23.307 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance 32dd7fb0-7003-48cc-b688-4b94946c911f from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Dec  9 10:56:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:23.309 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/32dd7fb0-7003-48cc-b688-4b94946c911f -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}c39d506960fbc5044d0bc54d9594567a78a3d14170701e46780a30eef7979125" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Dec  9 10:56:24 compute-0 nova_compute[189493]: 2025-12-09 10:56:24.013 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.361 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1959 Content-Type: application/json Date: Tue, 09 Dec 2025 10:56:23 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-caa63768-3dc6-4cde-91b7-11daee944012 x-openstack-request-id: req-caa63768-3dc6-4cde-91b7-11daee944012 _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.362 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "32dd7fb0-7003-48cc-b688-4b94946c911f", "name": "vn-afn7y6w-fel25ona52mn-zi55qxbdeak4-vnf-r5yma3vxwd5y", "status": "ACTIVE", "tenant_id": "736bbfddbeea47e3ac9d863ba120b8f2", "user_id": "e6d3a937c2a74eb0816d9f63820935e0", "metadata": {"metering.server_group": "24f6e5b2-dd43-46f1-87a4-e2efc1300914"}, "hostId": "17e7a15a42f56673ff2b1bfd38625d4824c4455b94d5713ec4c3a7ee", "image": {"id": "53d12211-5d5c-4333-b3ee-e3dcf1663767", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/53d12211-5d5c-4333-b3ee-e3dcf1663767"}]}, "flavor": {"id": "cf91b364-8467-4d1e-8c92-f7d1fab99905", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/cf91b364-8467-4d1e-8c92-f7d1fab99905"}]}, "created": "2025-12-09T10:55:31Z", "updated": "2025-12-09T10:55:41Z", "addresses": {"private": [{"version": 4, "addr": "192.168.0.98", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:83:9f:5d"}, {"version": 4, "addr": "192.168.122.244", "OS-EXT-IPS:type": "floating", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:83:9f:5d"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/32dd7fb0-7003-48cc-b688-4b94946c911f"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/32dd7fb0-7003-48cc-b688-4b94946c911f"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": null, "OS-SRV-USG:launched_at": "2025-12-09T10:55:41.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "basic"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-00000003", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.363 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/32dd7fb0-7003-48cc-b688-4b94946c911f used request id req-caa63768-3dc6-4cde-91b7-11daee944012 request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.366 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '32dd7fb0-7003-48cc-b688-4b94946c911f', 'name': 'vn-afn7y6w-fel25ona52mn-zi55qxbdeak4-vnf-r5yma3vxwd5y', 'flavor': {'id': 'cf91b364-8467-4d1e-8c92-f7d1fab99905', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '53d12211-5d5c-4333-b3ee-e3dcf1663767'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000003', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '736bbfddbeea47e3ac9d863ba120b8f2', 'user_id': 'e6d3a937c2a74eb0816d9f63820935e0', 'hostId': '17e7a15a42f56673ff2b1bfd38625d4824c4455b94d5713ec4c3a7ee', 'status': 'active', 'metadata': {'metering.server_group': '24f6e5b2-dd43-46f1-87a4-e2efc1300914'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.373 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '1bddf2bf-8932-4428-97d7-7342a7ec414b', 'name': 'vn-afn7y6w-x2vp5udxgoax-du67okrzyrz6-vnf-c7uowjdwt46l', 'flavor': {'id': 'cf91b364-8467-4d1e-8c92-f7d1fab99905', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '53d12211-5d5c-4333-b3ee-e3dcf1663767'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '736bbfddbeea47e3ac9d863ba120b8f2', 'user_id': 'e6d3a937c2a74eb0816d9f63820935e0', 'hostId': '17e7a15a42f56673ff2b1bfd38625d4824c4455b94d5713ec4c3a7ee', 'status': 'active', 'metadata': {'metering.server_group': '24f6e5b2-dd43-46f1-87a4-e2efc1300914'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.380 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f', 'name': 'test_0', 'flavor': {'id': 'cf91b364-8467-4d1e-8c92-f7d1fab99905', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '53d12211-5d5c-4333-b3ee-e3dcf1663767'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '736bbfddbeea47e3ac9d863ba120b8f2', 'user_id': 'e6d3a937c2a74eb0816d9f63820935e0', 'hostId': '17e7a15a42f56673ff2b1bfd38625d4824c4455b94d5713ec4c3a7ee', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
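
Each "instance data:" line is the Python repr of the merged Nova/libvirt record that the pollsters below run against. For log triage, a hypothetical extractor (the regex and log path are assumptions); these dicts are plain literals, so ast.literal_eval handles them, whereas the RESP BODY above is JSON and needs json.loads instead:

    # Hypothetical triage helper: pull the discovery dicts out of a log file.
    import ast
    import re

    PATTERN = re.compile(r"instance data: (\{.*\}) discover_libvirt_polling")

    def instance_dicts(path):
        with open(path) as fh:
            for line in fh:
                m = PATTERN.search(line)
                if m:
                    # the repr contains only plain literals, so this is safe
                    yield ast.literal_eval(m.group(1))

    for inst in instance_dicts("/var/log/messages"):  # assumed log location
        print(inst["id"], inst["name"], inst["OS-EXT-STS:vm_state"])
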
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.380 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.381 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1b800>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.381 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1b800>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
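
This "Checking if we need coordination" / "not configured in a source ... that requires coordination" pair repeats before every pollster below: when a polling source names a coordination group, a hash ring splits the discovered resources across the agents in that group; here the hashrings list is [None], so this agent polls every local instance itself. A toy sketch of the partitioning idea (ceilometer's real implementation delegates to the tooz library; the two-agent list is hypothetical):

    # Toy illustration of the coordination decision logged above: hash each
    # resource id into one bucket per agent. Ceilometer itself uses tooz
    # partitioned groups; this standalone sketch only shows the idea.
    import hashlib

    def assigned_agent(resource_id, agents):
        h = int(hashlib.md5(resource_id.encode()).hexdigest(), 16)
        return sorted(agents)[h % len(agents)]

    agents = ["compute-0", "compute-1"]  # hypothetical two-agent deployment
    print(assigned_agent("32dd7fb0-7003-48cc-b688-4b94946c911f", agents))
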
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.381 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.383 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-12-09T10:56:24.381624) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.390 14 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for 32dd7fb0-7003-48cc-b688-4b94946c911f / tapd6164edf-ad inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.390 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/network.incoming.bytes volume: 1486 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.400 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/network.incoming.bytes volume: 4975 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.407 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/network.incoming.bytes volume: 2094 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.408 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
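
network.incoming.bytes is a cumulative counter read per tap device (the "No delta meter predecessor" line shows the first poll of tapd6164edf-ad has nothing to difference against), so a single sample such as 1486 only becomes a bandwidth figure once two polls are differenced. A minimal sketch of that arithmetic, with a hypothetical second sample:

    # Sketch: difference two cumulative byte samples into an average rate.
    # The second sample is hypothetical; 1486 is the value polled above.
    def rate_bps(prev_bytes, cur_bytes, interval_s):
        return (cur_bytes - prev_bytes) / interval_s

    print(rate_bps(1486, 21486, 300.0))  # ~66.7 bytes/s over a 300 s interval
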
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.408 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f8a7854a570>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.408 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.408 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e19820>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.408 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e19820>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.408 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.409 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-12-09T10:56:24.408868) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.449 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.450 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.451 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.483 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.484 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.485 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.516 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.516 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.517 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.517 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
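
disk.device.capacity emits three samples per instance, which lines up with the m1.small flavor in the discovery dicts (disk: 1 and ephemeral: 1, i.e. two 1 GiB block devices) plus a small third device, plausibly the config drive ("config_drive": "True" in the Nova response). The arithmetic behind the repeated 1073741824 figure:

    # 'disk: 1' and 'ephemeral: 1' in the flavor are GiB-sized devices:
    assert 1 * 1024**3 == 1073741824
    # the third, smaller sample (583680 or 485376 bytes) would be the
    # config-drive ISO (an assumption; device names are not shown here).
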
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.517 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f8a75eb8050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.518 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.518 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75eb8080>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.518 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75eb8080>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.518 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.518 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/network.outgoing.packets volume: 14 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.518 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-12-09T10:56:24.518438) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.519 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/network.outgoing.packets volume: 45 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.519 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/network.outgoing.packets volume: 23 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.519 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.520 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f8a75eb80e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.520 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.520 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75eb8110>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.520 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75eb8110>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.520 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.520 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-12-09T10:56:24.520447) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.520 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.521 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.521 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.521 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.521 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f8a75e1b260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.522 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.522 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1b1a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.522 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1b1a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.522 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.522 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-12-09T10:56:24.522408) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.594 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.595 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.596 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.677 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.read.bytes volume: 23325184 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.677 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.678 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.807 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.808 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.809 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.809 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.809 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f8a75eb8170>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.809 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.809 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75eb81a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.810 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75eb81a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.810 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.810 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.810 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.810 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-12-09T10:56:24.810076) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.810 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.811 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.811 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f8a75e1b290>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.811 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.811 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1b2c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.811 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1b2c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.812 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.812 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/disk.device.read.latency volume: 386883662 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.812 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/disk.device.read.latency volume: 91523197 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.812 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/disk.device.read.latency volume: 560654086 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.812 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-12-09T10:56:24.812010) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.813 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.read.latency volume: 439593872 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.813 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.read.latency volume: 92612690 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.813 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.read.latency volume: 59905939 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.813 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.read.latency volume: 469600468 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.813 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.read.latency volume: 78501609 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.813 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.read.latency volume: 60811824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.814 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.814 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f8a75e1b2f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.814 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.815 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1b320>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.815 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1b320>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.815 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.815 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.815 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.815 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.815 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.read.requests volume: 844 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.816 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.816 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.816 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.816 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.816 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.817 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.817 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f8a75e1b350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.817 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.817 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1b380>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.817 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1b380>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.817 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.817 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/disk.device.usage volume: 21299200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.818 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.818 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.818 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.usage volume: 21364736 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.818 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.818 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.819 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.819 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.819 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.820 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.820 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f8a7710f530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.820 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.820 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a78fa8380>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.820 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a78fa8380>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.820 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.820 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-12-09T10:56:24.815184) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.821 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-12-09T10:56:24.817733) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.821 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-12-09T10:56:24.820461) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.842 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/cpu volume: 29770000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.864 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/cpu volume: 309580000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.894 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/cpu volume: 40010000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.895 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
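
The cpu samples are cumulative guest CPU time in nanoseconds (309580000000 ns is roughly 310 s for instance-00000002), so utilization only falls out of the delta between two polls divided by wall-clock time times vCPUs (1 here, per the flavor). A sketch with a hypothetical follow-up sample:

    # Sketch: cumulative cpu time (ns) to utilization % between two polls.
    def cpu_util_pct(prev_ns, cur_ns, interval_s, vcpus):
        return 100.0 * (cur_ns - prev_ns) / (interval_s * 1e9 * vcpus)

    # hypothetical next sample 15e9 ns later on this 1-vCPU guest:
    print(cpu_util_pct(309_580_000_000, 324_580_000_000, 300, 1))  # 5.0
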
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.895 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f8a78ed1430>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.895 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.895 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a7702ebd0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.895 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a7702ebd0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.895 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.895 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/disk.device.allocation volume: 22224896 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.896 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.896 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.896 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.allocation volume: 21635072 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.896 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.897 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.897 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.allocation volume: 21307392 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.897 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.897 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.898 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.898 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f8a75e1b3b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.898 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.898 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1b3e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.898 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-12-09T10:56:24.895684) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.898 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1b3e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.898 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.899 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/disk.device.write.bytes volume: 41697280 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.899 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.899 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-12-09T10:56:24.898823) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.899 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.899 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.write.bytes volume: 41836544 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.899 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.900 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.900 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.900 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.900 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.901 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.901 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f8a75e1b410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.901 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.901 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1b440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.901 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1b440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.901 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.901 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/disk.device.write.latency volume: 1654583151 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.902 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/disk.device.write.latency volume: 9651641 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.902 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-12-09T10:56:24.901788) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.902 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.902 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.write.latency volume: 2118298266 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.902 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.write.latency volume: 13222286 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.903 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.903 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.write.latency volume: 1299788707 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.903 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.write.latency volume: 9241063 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.903 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.904 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.904 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f8a75eb8410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.904 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.904 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75eb8440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.904 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75eb8440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.904 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.904 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.905 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.905 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.905 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.905 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f8a75e1be90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.905 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.905 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a78c21460>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.905 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a78c21460>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.906 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.906 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-12-09T10:56:24.904658) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.906 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/network.outgoing.bytes volume: 1751 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.906 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/network.outgoing.bytes volume: 5004 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.906 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/network.outgoing.bytes volume: 2314 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.907 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.907 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f8a75e1b470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.907 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.907 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-12-09T10:56:24.906149) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.907 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1b4a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.907 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1b4a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.907 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.907 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/disk.device.write.requests volume: 221 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.908 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.908 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.908 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.write.requests volume: 236 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.908 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.909 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-12-09T10:56:24.907670) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.909 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.909 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.write.requests volume: 234 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.909 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.909 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.910 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.910 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f8a75e1b830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.910 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.910 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1bce0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.910 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1bce0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.910 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.910 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.910 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.911 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.911 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.911 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f8a75e1b4d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.911 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.911 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1b500>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.911 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1b500>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.911 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.912 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.912 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f8a75e1bad0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.912 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.912 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1bd10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.913 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1bd10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.913 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.913 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.913 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.incoming.bytes.rate from polling [<NovaLikeServer: vn-afn7y6w-fel25ona52mn-zi55qxbdeak4-vnf-r5yma3vxwd5y>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: vn-afn7y6w-fel25ona52mn-zi55qxbdeak4-vnf-r5yma3vxwd5y>]
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.914 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f8a75e1b530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.914 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.914 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1b560>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.914 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1b560>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.914 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.915 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-12-09T10:56:24.910589) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.916 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-12-09T10:56:24.911736) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.916 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.rate (2025-12-09T10:56:24.913181) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.916 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-12-09T10:56:24.914753) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.916 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.916 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f8a75e1bd40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.916 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.916 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1bd70>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.916 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1bd70>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.917 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.917 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/network.incoming.packets volume: 12 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.917 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/network.incoming.packets volume: 34 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.917 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/network.incoming.packets volume: 20 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.918 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.918 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-12-09T10:56:24.916988) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.918 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f8a75e1bda0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.918 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.918 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1bdd0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.918 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1bdd0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.918 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.918 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.919 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.919 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.919 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.919 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f8a75e1be00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.920 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.920 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1be30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.920 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1be30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.920 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.920 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.920 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.921 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.921 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.921 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f8a75e1bef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.921 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.921 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1bf20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.921 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1bf20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.921 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.922 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.922 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-12-09T10:56:24.918547) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.922 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-12-09T10:56:24.920349) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.922 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.922 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-12-09T10:56:24.921851) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.922 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.922 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.923 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f8a75e1b770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.923 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.923 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1b7a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.923 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1b7a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.923 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.923 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/memory.usage volume: 49.5390625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.923 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/memory.usage volume: 49.16015625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.924 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/memory.usage volume: 48.828125 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.924 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.924 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f8a75e1bf80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.924 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.924 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1bfb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.924 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1bfb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.924 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.925 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.925 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.outgoing.bytes.rate from polling [<NovaLikeServer: vn-afn7y6w-fel25ona52mn-zi55qxbdeak4-vnf-r5yma3vxwd5y>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: vn-afn7y6w-fel25ona52mn-zi55qxbdeak4-vnf-r5yma3vxwd5y>]
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.925 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-12-09T10:56:24.923379) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.925 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.rate (2025-12-09T10:56:24.924862) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.925 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.926 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.926 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.926 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.926 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.926 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.926 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.926 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.926 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.926 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.926 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.926 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.926 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.926 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.927 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.927 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.927 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.927 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.927 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.927 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.927 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.927 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.927 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.927 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.927 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 10:56:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:56:24.927 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 10:56:24 compute-0 podman[243204]: 2025-12-09 10:56:24.978307372 +0000 UTC m=+0.132760058 container health_status e0a077177b2f078df1f170a6e5c0e8e08d4365b999ec0c487047ed6ab628f3d6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, container_name=ovn_controller, io.buildah.version=1.41.3)
Dec  9 10:56:27 compute-0 nova_compute[189493]: 2025-12-09 10:56:27.274 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 10:56:27 compute-0 nova_compute[189493]: 2025-12-09 10:56:27.837 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  9 10:56:27 compute-0 podman[243231]: 2025-12-09 10:56:27.963798351 +0000 UTC m=+0.100339150 container health_status d3a438131bb4ae6fd62d2e1493edbbbd51d1b8d6cbe1e9243f414a3aa421452b (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  9 10:56:29 compute-0 nova_compute[189493]: 2025-12-09 10:56:29.018 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 10:56:29 compute-0 podman[203687]: time="2025-12-09T10:56:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  9 10:56:29 compute-0 podman[203687]: @ - - [09/Dec/2025:10:56:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Dec  9 10:56:29 compute-0 podman[203687]: @ - - [09/Dec/2025:10:56:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4798 "" "Go-http-client/1.1"
Dec  9 10:56:29 compute-0 nova_compute[189493]: 2025-12-09 10:56:29.845 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  9 10:56:30 compute-0 nova_compute[189493]: 2025-12-09 10:56:30.842 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  9 10:56:30 compute-0 nova_compute[189493]: 2025-12-09 10:56:30.843 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  9 10:56:31 compute-0 openstack_network_exporter[205823]: ERROR   10:56:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  9 10:56:31 compute-0 openstack_network_exporter[205823]: ERROR   10:56:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  9 10:56:31 compute-0 openstack_network_exporter[205823]: ERROR   10:56:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  9 10:56:31 compute-0 openstack_network_exporter[205823]: ERROR   10:56:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  9 10:56:31 compute-0 openstack_network_exporter[205823]: ERROR   10:56:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  9 10:56:31 compute-0 nova_compute[189493]: 2025-12-09 10:56:31.843 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  9 10:56:31 compute-0 nova_compute[189493]: 2025-12-09 10:56:31.846 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec  9 10:56:31 compute-0 nova_compute[189493]: 2025-12-09 10:56:31.847 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec  9 10:56:32 compute-0 nova_compute[189493]: 2025-12-09 10:56:32.276 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 10:56:32 compute-0 nova_compute[189493]: 2025-12-09 10:56:32.422 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Acquiring lock "refresh_cache-41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec  9 10:56:32 compute-0 nova_compute[189493]: 2025-12-09 10:56:32.423 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Acquired lock "refresh_cache-41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec  9 10:56:32 compute-0 nova_compute[189493]: 2025-12-09 10:56:32.424 189497 DEBUG nova.network.neutron [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] [instance: 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec  9 10:56:32 compute-0 nova_compute[189493]: 2025-12-09 10:56:32.425 189497 DEBUG nova.objects.instance [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec  9 10:56:34 compute-0 nova_compute[189493]: 2025-12-09 10:56:34.023 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 10:56:36 compute-0 nova_compute[189493]: 2025-12-09 10:56:36.414 189497 DEBUG nova.network.neutron [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] [instance: 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f] Updating instance_info_cache with network_info: [{"id": "2c684388-b6d9-4de0-8691-29807fabed2c", "address": "fa:16:3e:c7:65:39", "network": {"id": "c5af7354-5afe-400a-9e13-5500648117d8", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.250", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.226", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "736bbfddbeea47e3ac9d863ba120b8f2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2c684388-b6", "ovs_interfaceid": "2c684388-b6d9-4de0-8691-29807fabed2c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec  9 10:56:36 compute-0 nova_compute[189493]: 2025-12-09 10:56:36.438 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Releasing lock "refresh_cache-41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec  9 10:56:36 compute-0 nova_compute[189493]: 2025-12-09 10:56:36.439 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] [instance: 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  9 10:56:36 compute-0 nova_compute[189493]: 2025-12-09 10:56:36.439 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  9 10:56:36 compute-0 nova_compute[189493]: 2025-12-09 10:56:36.440 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  9 10:56:36 compute-0 nova_compute[189493]: 2025-12-09 10:56:36.440 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  9 10:56:36 compute-0 nova_compute[189493]: 2025-12-09 10:56:36.485 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  9 10:56:36 compute-0 nova_compute[189493]: 2025-12-09 10:56:36.486 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  9 10:56:36 compute-0 nova_compute[189493]: 2025-12-09 10:56:36.486 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  9 10:56:36 compute-0 nova_compute[189493]: 2025-12-09 10:56:36.487 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  9 10:56:36 compute-0 nova_compute[189493]: 2025-12-09 10:56:36.598 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/32dd7fb0-7003-48cc-b688-4b94946c911f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  9 10:56:36 compute-0 podman[243254]: 2025-12-09 10:56:36.65514709 +0000 UTC m=+0.107813480 container health_status 0391d8911d61abd7376f1f93f329cadfe8d3add845c9e6f46fc2c3dfbcc4f02a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec  9 10:56:36 compute-0 nova_compute[189493]: 2025-12-09 10:56:36.706 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/32dd7fb0-7003-48cc-b688-4b94946c911f/disk --force-share --output=json" returned: 0 in 0.108s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  9 10:56:36 compute-0 nova_compute[189493]: 2025-12-09 10:56:36.709 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/32dd7fb0-7003-48cc-b688-4b94946c911f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  9 10:56:36 compute-0 nova_compute[189493]: 2025-12-09 10:56:36.770 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/32dd7fb0-7003-48cc-b688-4b94946c911f/disk --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  9 10:56:36 compute-0 nova_compute[189493]: 2025-12-09 10:56:36.771 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/32dd7fb0-7003-48cc-b688-4b94946c911f/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  9 10:56:36 compute-0 nova_compute[189493]: 2025-12-09 10:56:36.860 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/32dd7fb0-7003-48cc-b688-4b94946c911f/disk.eph0 --force-share --output=json" returned: 0 in 0.088s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  9 10:56:36 compute-0 nova_compute[189493]: 2025-12-09 10:56:36.862 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/32dd7fb0-7003-48cc-b688-4b94946c911f/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  9 10:56:36 compute-0 nova_compute[189493]: 2025-12-09 10:56:36.964 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/32dd7fb0-7003-48cc-b688-4b94946c911f/disk.eph0 --force-share --output=json" returned: 0 in 0.101s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  9 10:56:36 compute-0 nova_compute[189493]: 2025-12-09 10:56:36.981 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1bddf2bf-8932-4428-97d7-7342a7ec414b/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  9 10:56:37 compute-0 nova_compute[189493]: 2025-12-09 10:56:37.058 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1bddf2bf-8932-4428-97d7-7342a7ec414b/disk --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  9 10:56:37 compute-0 nova_compute[189493]: 2025-12-09 10:56:37.059 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1bddf2bf-8932-4428-97d7-7342a7ec414b/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  9 10:56:37 compute-0 nova_compute[189493]: 2025-12-09 10:56:37.145 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1bddf2bf-8932-4428-97d7-7342a7ec414b/disk --force-share --output=json" returned: 0 in 0.085s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  9 10:56:37 compute-0 nova_compute[189493]: 2025-12-09 10:56:37.148 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  9 10:56:37 compute-0 nova_compute[189493]: 2025-12-09 10:56:37.247 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.eph0 --force-share --output=json" returned: 0 in 0.099s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  9 10:56:37 compute-0 nova_compute[189493]: 2025-12-09 10:56:37.260 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  9 10:56:37 compute-0 nova_compute[189493]: 2025-12-09 10:56:37.291 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 10:56:37 compute-0 nova_compute[189493]: 2025-12-09 10:56:37.356 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.eph0 --force-share --output=json" returned: 0 in 0.096s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  9 10:56:37 compute-0 nova_compute[189493]: 2025-12-09 10:56:37.363 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  9 10:56:37 compute-0 nova_compute[189493]: 2025-12-09 10:56:37.436 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  9 10:56:37 compute-0 nova_compute[189493]: 2025-12-09 10:56:37.439 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  9 10:56:37 compute-0 nova_compute[189493]: 2025-12-09 10:56:37.517 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  9 10:56:37 compute-0 nova_compute[189493]: 2025-12-09 10:56:37.519 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  9 10:56:37 compute-0 nova_compute[189493]: 2025-12-09 10:56:37.580 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.eph0 --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  9 10:56:37 compute-0 nova_compute[189493]: 2025-12-09 10:56:37.581 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  9 10:56:37 compute-0 nova_compute[189493]: 2025-12-09 10:56:37.636 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.eph0 --force-share --output=json" returned: 0 in 0.054s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  9 10:56:38 compute-0 nova_compute[189493]: 2025-12-09 10:56:38.020 189497 WARNING nova.virt.libvirt.driver [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  9 10:56:38 compute-0 nova_compute[189493]: 2025-12-09 10:56:38.022 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4867MB free_disk=72.14188766479492GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  9 10:56:38 compute-0 nova_compute[189493]: 2025-12-09 10:56:38.022 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  9 10:56:38 compute-0 nova_compute[189493]: 2025-12-09 10:56:38.023 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  9 10:56:38 compute-0 nova_compute[189493]: 2025-12-09 10:56:38.173 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Instance 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  9 10:56:38 compute-0 nova_compute[189493]: 2025-12-09 10:56:38.174 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Instance 1bddf2bf-8932-4428-97d7-7342a7ec414b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  9 10:56:38 compute-0 nova_compute[189493]: 2025-12-09 10:56:38.175 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Instance 32dd7fb0-7003-48cc-b688-4b94946c911f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  9 10:56:38 compute-0 nova_compute[189493]: 2025-12-09 10:56:38.175 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  9 10:56:38 compute-0 nova_compute[189493]: 2025-12-09 10:56:38.176 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=2048MB phys_disk=79GB used_disk=6GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  9 10:56:38 compute-0 nova_compute[189493]: 2025-12-09 10:56:38.264 189497 DEBUG nova.compute.provider_tree [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Inventory has not changed in ProviderTree for provider: cdc1168d-33c9-4d2c-8f23-1b695a68afd0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  9 10:56:38 compute-0 nova_compute[189493]: 2025-12-09 10:56:38.277 189497 DEBUG nova.scheduler.client.report [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Inventory has not changed for provider cdc1168d-33c9-4d2c-8f23-1b695a68afd0 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  9 10:56:38 compute-0 nova_compute[189493]: 2025-12-09 10:56:38.307 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  9 10:56:38 compute-0 nova_compute[189493]: 2025-12-09 10:56:38.307 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.285s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  9 10:56:39 compute-0 nova_compute[189493]: 2025-12-09 10:56:39.028 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 10:56:39 compute-0 nova_compute[189493]: 2025-12-09 10:56:39.709 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  9 10:56:39 compute-0 nova_compute[189493]: 2025-12-09 10:56:39.710 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  9 10:56:39 compute-0 podman[243311]: 2025-12-09 10:56:39.908451367 +0000 UTC m=+0.058960741 container health_status 8508a94dacd5acdb5dbf860f4282331529be5c86ebd3e90b10e1dde8bc5013e9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  9 10:56:42 compute-0 nova_compute[189493]: 2025-12-09 10:56:42.282 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 10:56:44 compute-0 nova_compute[189493]: 2025-12-09 10:56:44.033 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 10:56:44 compute-0 podman[243335]: 2025-12-09 10:56:44.78469584 +0000 UTC m=+0.107219162 container health_status 8ad198c17f1da12dc50d5e17562d0139fb2a2f84db056ee9551dbf4f34c4cb9d (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-container, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9, release-0.7.12=, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, name=ubi9, config_id=edpm, container_name=kepler, managed_by=edpm_ansible, distribution-scope=public, io.buildah.version=1.29.0, maintainer=Red Hat, Inc., version=9.4, release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, summary=Provides the latest release of Red Hat Universal Base Image 9., build-date=2024-09-18T21:23:30)
Dec  9 10:56:44 compute-0 podman[243336]: 2025-12-09 10:56:44.810783663 +0000 UTC m=+0.122063396 container health_status ceb1c84a2b093143b9383b7e11364d7e851348d724743a0cd9ce4fd0c7070c92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible)
Dec  9 10:56:47 compute-0 nova_compute[189493]: 2025-12-09 10:56:47.285 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 10:56:47 compute-0 podman[243374]: 2025-12-09 10:56:47.960036207 +0000 UTC m=+0.097203065 container health_status b432835229990b9e7cd237d75f8273b15e565fca524d4ea9a7c1f1bf3c773614 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Dec  9 10:56:47 compute-0 podman[243373]: 2025-12-09 10:56:47.968695547 +0000 UTC m=+0.117365052 container health_status 8f562587c42532f877bd4ac5090cf2d81dd9415b6201e22f74972e6d6b9e9403 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent)
Dec  9 10:56:49 compute-0 nova_compute[189493]: 2025-12-09 10:56:49.036 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 10:56:52 compute-0 nova_compute[189493]: 2025-12-09 10:56:52.287 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 10:56:53 compute-0 podman[243412]: 2025-12-09 10:56:53.916436 +0000 UTC m=+0.072191500 container health_status 5da5cd4e36e0bba48fb617392bc8983ed1dbced7e4599ef74bb3327a2d50468d (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, distribution-scope=public, vendor=Red Hat, Inc., name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, vcs-type=git)
Dec  9 10:56:54 compute-0 nova_compute[189493]: 2025-12-09 10:56:54.040 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 10:56:55 compute-0 podman[243432]: 2025-12-09 10:56:55.949677974 +0000 UTC m=+0.103580465 container health_status e0a077177b2f078df1f170a6e5c0e8e08d4365b999ec0c487047ed6ab628f3d6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Dec  9 10:56:57 compute-0 nova_compute[189493]: 2025-12-09 10:56:57.289 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 10:56:58 compute-0 podman[243460]: 2025-12-09 10:56:58.343095864 +0000 UTC m=+0.064854025 container health_status d3a438131bb4ae6fd62d2e1493edbbbd51d1b8d6cbe1e9243f414a3aa421452b (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec  9 10:56:59 compute-0 nova_compute[189493]: 2025-12-09 10:56:59.043 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 10:56:59 compute-0 podman[203687]: time="2025-12-09T10:56:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  9 10:56:59 compute-0 podman[203687]: @ - - [09/Dec/2025:10:56:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Dec  9 10:56:59 compute-0 podman[203687]: @ - - [09/Dec/2025:10:56:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4796 "" "Go-http-client/1.1"
Dec  9 10:57:01 compute-0 openstack_network_exporter[205823]: ERROR   10:57:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  9 10:57:01 compute-0 openstack_network_exporter[205823]: ERROR   10:57:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  9 10:57:01 compute-0 openstack_network_exporter[205823]: ERROR   10:57:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  9 10:57:01 compute-0 openstack_network_exporter[205823]: ERROR   10:57:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  9 10:57:01 compute-0 openstack_network_exporter[205823]: ERROR   10:57:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  9 10:57:02 compute-0 nova_compute[189493]: 2025-12-09 10:57:02.291 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 10:57:04 compute-0 nova_compute[189493]: 2025-12-09 10:57:04.048 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 10:57:06 compute-0 podman[243485]: 2025-12-09 10:57:06.927154944 +0000 UTC m=+0.079727631 container health_status 0391d8911d61abd7376f1f93f329cadfe8d3add845c9e6f46fc2c3dfbcc4f02a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible)
Dec  9 10:57:07 compute-0 nova_compute[189493]: 2025-12-09 10:57:07.293 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 10:57:09 compute-0 nova_compute[189493]: 2025-12-09 10:57:09.052 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 10:57:10 compute-0 podman[243504]: 2025-12-09 10:57:10.938176119 +0000 UTC m=+0.079120555 container health_status 8508a94dacd5acdb5dbf860f4282331529be5c86ebd3e90b10e1dde8bc5013e9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  9 10:57:12 compute-0 nova_compute[189493]: 2025-12-09 10:57:12.296 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 10:57:14 compute-0 nova_compute[189493]: 2025-12-09 10:57:14.057 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 10:57:14 compute-0 podman[243527]: 2025-12-09 10:57:14.921341293 +0000 UTC m=+0.077803340 container health_status 8ad198c17f1da12dc50d5e17562d0139fb2a2f84db056ee9551dbf4f34c4cb9d (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, container_name=kepler, summary=Provides the latest release of Red Hat Universal Base Image 9., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, release-0.7.12=, vendor=Red Hat, Inc., io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=base rhel9, name=ubi9, vcs-type=git, distribution-scope=public, io.openshift.expose-services=, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, maintainer=Red Hat, Inc., build-date=2024-09-18T21:23:30, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, architecture=x86_64, version=9.4, com.redhat.component=ubi9-container, config_id=edpm)
Dec  9 10:57:14 compute-0 podman[243528]: 2025-12-09 10:57:14.938661783 +0000 UTC m=+0.092313365 container health_status ceb1c84a2b093143b9383b7e11364d7e851348d724743a0cd9ce4fd0c7070c92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=edpm, org.label-schema.license=GPLv2, tcib_managed=true)
Dec  9 10:57:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:57:16.989 106644 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  9 10:57:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:57:16.990 106644 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  9 10:57:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:57:16.990 106644 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  9 10:57:17 compute-0 nova_compute[189493]: 2025-12-09 10:57:17.299 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 10:57:18 compute-0 podman[243565]: 2025-12-09 10:57:18.943042441 +0000 UTC m=+0.077689407 container health_status b432835229990b9e7cd237d75f8273b15e565fca524d4ea9a7c1f1bf3c773614 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible, tcib_managed=true, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, org.label-schema.name=CentOS Stream 10 Base Image)
Dec  9 10:57:18 compute-0 podman[243564]: 2025-12-09 10:57:18.965372564 +0000 UTC m=+0.105533197 container health_status 8f562587c42532f877bd4ac5090cf2d81dd9415b6201e22f74972e6d6b9e9403 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_managed=true, container_name=ovn_metadata_agent)
Dec  9 10:57:19 compute-0 nova_compute[189493]: 2025-12-09 10:57:19.062 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 10:57:22 compute-0 nova_compute[189493]: 2025-12-09 10:57:22.301 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 10:57:24 compute-0 nova_compute[189493]: 2025-12-09 10:57:24.068 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 10:57:24 compute-0 podman[243602]: 2025-12-09 10:57:24.942524069 +0000 UTC m=+0.089105560 container health_status 5da5cd4e36e0bba48fb617392bc8983ed1dbced7e4599ef74bb3327a2d50468d (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-minimal-container, distribution-scope=public, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, version=9.6, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, io.buildah.version=1.33.7, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, release=1755695350, architecture=x86_64, config_id=edpm, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc.)
Dec  9 10:57:26 compute-0 podman[243623]: 2025-12-09 10:57:26.94630223 +0000 UTC m=+0.098696155 container health_status e0a077177b2f078df1f170a6e5c0e8e08d4365b999ec0c487047ed6ab628f3d6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Dec  9 10:57:27 compute-0 nova_compute[189493]: 2025-12-09 10:57:27.302 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 10:57:28 compute-0 nova_compute[189493]: 2025-12-09 10:57:28.796 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 10:57:28 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:57:28.793 106644 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=6, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '56:ee:a7', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '3e:d4:ad:27:cb:0f'}, ipsec=False) old=SB_Global(nb_cfg=5) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec  9 10:57:28 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:57:28.794 106644 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec  9 10:57:28 compute-0 podman[243652]: 2025-12-09 10:57:28.919897739 +0000 UTC m=+0.073895976 container health_status d3a438131bb4ae6fd62d2e1493edbbbd51d1b8d6cbe1e9243f414a3aa421452b (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  9 10:57:29 compute-0 nova_compute[189493]: 2025-12-09 10:57:29.069 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 10:57:29 compute-0 podman[203687]: time="2025-12-09T10:57:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  9 10:57:29 compute-0 podman[203687]: @ - - [09/Dec/2025:10:57:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Dec  9 10:57:29 compute-0 podman[203687]: @ - - [09/Dec/2025:10:57:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4794 "" "Go-http-client/1.1"
Dec  9 10:57:29 compute-0 nova_compute[189493]: 2025-12-09 10:57:29.838 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  9 10:57:29 compute-0 nova_compute[189493]: 2025-12-09 10:57:29.840 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  9 10:57:31 compute-0 openstack_network_exporter[205823]: ERROR   10:57:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  9 10:57:31 compute-0 openstack_network_exporter[205823]: ERROR   10:57:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  9 10:57:31 compute-0 openstack_network_exporter[205823]: ERROR   10:57:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  9 10:57:31 compute-0 openstack_network_exporter[205823]: ERROR   10:57:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  9 10:57:31 compute-0 openstack_network_exporter[205823]: ERROR   10:57:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  9 10:57:31 compute-0 nova_compute[189493]: 2025-12-09 10:57:31.837 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  9 10:57:31 compute-0 nova_compute[189493]: 2025-12-09 10:57:31.880 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  9 10:57:32 compute-0 nova_compute[189493]: 2025-12-09 10:57:32.305 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 10:57:32 compute-0 nova_compute[189493]: 2025-12-09 10:57:32.841 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  9 10:57:32 compute-0 nova_compute[189493]: 2025-12-09 10:57:32.842 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  9 10:57:33 compute-0 nova_compute[189493]: 2025-12-09 10:57:33.841 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  9 10:57:33 compute-0 nova_compute[189493]: 2025-12-09 10:57:33.843 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec  9 10:57:34 compute-0 nova_compute[189493]: 2025-12-09 10:57:34.074 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 10:57:34 compute-0 nova_compute[189493]: 2025-12-09 10:57:34.394 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Acquiring lock "refresh_cache-1bddf2bf-8932-4428-97d7-7342a7ec414b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec  9 10:57:34 compute-0 nova_compute[189493]: 2025-12-09 10:57:34.395 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Acquired lock "refresh_cache-1bddf2bf-8932-4428-97d7-7342a7ec414b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec  9 10:57:34 compute-0 nova_compute[189493]: 2025-12-09 10:57:34.395 189497 DEBUG nova.network.neutron [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] [instance: 1bddf2bf-8932-4428-97d7-7342a7ec414b] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec  9 10:57:35 compute-0 nova_compute[189493]: 2025-12-09 10:57:35.235 189497 DEBUG oslo_concurrency.lockutils [None req-3a588a98-b06c-468e-8ea4-854b019a066d e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Acquiring lock "7b43ca09-ed65-4465-9fcc-95caa6dc9a88" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  9 10:57:35 compute-0 nova_compute[189493]: 2025-12-09 10:57:35.238 189497 DEBUG oslo_concurrency.lockutils [None req-3a588a98-b06c-468e-8ea4-854b019a066d e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lock "7b43ca09-ed65-4465-9fcc-95caa6dc9a88" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  9 10:57:35 compute-0 nova_compute[189493]: 2025-12-09 10:57:35.274 189497 DEBUG nova.compute.manager [None req-3a588a98-b06c-468e-8ea4-854b019a066d e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 7b43ca09-ed65-4465-9fcc-95caa6dc9a88] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Dec  9 10:57:35 compute-0 nova_compute[189493]: 2025-12-09 10:57:35.389 189497 DEBUG oslo_concurrency.lockutils [None req-3a588a98-b06c-468e-8ea4-854b019a066d e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  9 10:57:35 compute-0 nova_compute[189493]: 2025-12-09 10:57:35.390 189497 DEBUG oslo_concurrency.lockutils [None req-3a588a98-b06c-468e-8ea4-854b019a066d e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  9 10:57:35 compute-0 nova_compute[189493]: 2025-12-09 10:57:35.406 189497 DEBUG nova.virt.hardware [None req-3a588a98-b06c-468e-8ea4-854b019a066d e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Dec  9 10:57:35 compute-0 nova_compute[189493]: 2025-12-09 10:57:35.407 189497 INFO nova.compute.claims [None req-3a588a98-b06c-468e-8ea4-854b019a066d e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 7b43ca09-ed65-4465-9fcc-95caa6dc9a88] Claim successful on node compute-0.ctlplane.example.com
Dec  9 10:57:35 compute-0 nova_compute[189493]: 2025-12-09 10:57:35.631 189497 DEBUG nova.compute.provider_tree [None req-3a588a98-b06c-468e-8ea4-854b019a066d e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Inventory has not changed in ProviderTree for provider: cdc1168d-33c9-4d2c-8f23-1b695a68afd0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec  9 10:57:35 compute-0 nova_compute[189493]: 2025-12-09 10:57:35.646 189497 DEBUG nova.scheduler.client.report [None req-3a588a98-b06c-468e-8ea4-854b019a066d e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Inventory has not changed for provider cdc1168d-33c9-4d2c-8f23-1b695a68afd0 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec  9 10:57:35 compute-0 nova_compute[189493]: 2025-12-09 10:57:35.661 189497 DEBUG nova.network.neutron [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] [instance: 1bddf2bf-8932-4428-97d7-7342a7ec414b] Updating instance_info_cache with network_info: [{"id": "7819acf8-daa2-4391-96d4-ef33c260f794", "address": "fa:16:3e:01:4e:b4", "network": {"id": "c5af7354-5afe-400a-9e13-5500648117d8", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.212", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.172", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "736bbfddbeea47e3ac9d863ba120b8f2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7819acf8-da", "ovs_interfaceid": "7819acf8-daa2-4391-96d4-ef33c260f794", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec  9 10:57:35 compute-0 nova_compute[189493]: 2025-12-09 10:57:35.675 189497 DEBUG oslo_concurrency.lockutils [None req-3a588a98-b06c-468e-8ea4-854b019a066d e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.285s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  9 10:57:35 compute-0 nova_compute[189493]: 2025-12-09 10:57:35.677 189497 DEBUG nova.compute.manager [None req-3a588a98-b06c-468e-8ea4-854b019a066d e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 7b43ca09-ed65-4465-9fcc-95caa6dc9a88] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Dec  9 10:57:35 compute-0 nova_compute[189493]: 2025-12-09 10:57:35.685 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Releasing lock "refresh_cache-1bddf2bf-8932-4428-97d7-7342a7ec414b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec  9 10:57:35 compute-0 nova_compute[189493]: 2025-12-09 10:57:35.686 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] [instance: 1bddf2bf-8932-4428-97d7-7342a7ec414b] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec  9 10:57:35 compute-0 nova_compute[189493]: 2025-12-09 10:57:35.688 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  9 10:57:35 compute-0 nova_compute[189493]: 2025-12-09 10:57:35.689 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  9 10:57:35 compute-0 nova_compute[189493]: 2025-12-09 10:57:35.728 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  9 10:57:35 compute-0 nova_compute[189493]: 2025-12-09 10:57:35.729 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  9 10:57:35 compute-0 nova_compute[189493]: 2025-12-09 10:57:35.730 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  9 10:57:35 compute-0 nova_compute[189493]: 2025-12-09 10:57:35.731 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec  9 10:57:35 compute-0 nova_compute[189493]: 2025-12-09 10:57:35.763 189497 DEBUG nova.compute.manager [None req-3a588a98-b06c-468e-8ea4-854b019a066d e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 7b43ca09-ed65-4465-9fcc-95caa6dc9a88] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Dec  9 10:57:35 compute-0 nova_compute[189493]: 2025-12-09 10:57:35.764 189497 DEBUG nova.network.neutron [None req-3a588a98-b06c-468e-8ea4-854b019a066d e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 7b43ca09-ed65-4465-9fcc-95caa6dc9a88] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Dec  9 10:57:35 compute-0 nova_compute[189493]: 2025-12-09 10:57:35.788 189497 INFO nova.virt.libvirt.driver [None req-3a588a98-b06c-468e-8ea4-854b019a066d e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 7b43ca09-ed65-4465-9fcc-95caa6dc9a88] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Dec  9 10:57:35 compute-0 nova_compute[189493]: 2025-12-09 10:57:35.844 189497 DEBUG nova.compute.manager [None req-3a588a98-b06c-468e-8ea4-854b019a066d e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 7b43ca09-ed65-4465-9fcc-95caa6dc9a88] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Dec  9 10:57:35 compute-0 nova_compute[189493]: 2025-12-09 10:57:35.875 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/32dd7fb0-7003-48cc-b688-4b94946c911f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  9 10:57:35 compute-0 nova_compute[189493]: 2025-12-09 10:57:35.961 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/32dd7fb0-7003-48cc-b688-4b94946c911f/disk --force-share --output=json" returned: 0 in 0.086s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  9 10:57:35 compute-0 nova_compute[189493]: 2025-12-09 10:57:35.963 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/32dd7fb0-7003-48cc-b688-4b94946c911f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  9 10:57:36 compute-0 nova_compute[189493]: 2025-12-09 10:57:36.034 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/32dd7fb0-7003-48cc-b688-4b94946c911f/disk --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  9 10:57:36 compute-0 nova_compute[189493]: 2025-12-09 10:57:36.036 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/32dd7fb0-7003-48cc-b688-4b94946c911f/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  9 10:57:36 compute-0 nova_compute[189493]: 2025-12-09 10:57:36.070 189497 DEBUG nova.compute.manager [None req-3a588a98-b06c-468e-8ea4-854b019a066d e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 7b43ca09-ed65-4465-9fcc-95caa6dc9a88] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Dec  9 10:57:36 compute-0 nova_compute[189493]: 2025-12-09 10:57:36.074 189497 DEBUG nova.virt.libvirt.driver [None req-3a588a98-b06c-468e-8ea4-854b019a066d e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 7b43ca09-ed65-4465-9fcc-95caa6dc9a88] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Dec  9 10:57:36 compute-0 nova_compute[189493]: 2025-12-09 10:57:36.076 189497 INFO nova.virt.libvirt.driver [None req-3a588a98-b06c-468e-8ea4-854b019a066d e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 7b43ca09-ed65-4465-9fcc-95caa6dc9a88] Creating image(s)
Dec  9 10:57:36 compute-0 nova_compute[189493]: 2025-12-09 10:57:36.078 189497 DEBUG oslo_concurrency.lockutils [None req-3a588a98-b06c-468e-8ea4-854b019a066d e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Acquiring lock "/var/lib/nova/instances/7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  9 10:57:36 compute-0 nova_compute[189493]: 2025-12-09 10:57:36.079 189497 DEBUG oslo_concurrency.lockutils [None req-3a588a98-b06c-468e-8ea4-854b019a066d e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lock "/var/lib/nova/instances/7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  9 10:57:36 compute-0 nova_compute[189493]: 2025-12-09 10:57:36.081 189497 DEBUG oslo_concurrency.lockutils [None req-3a588a98-b06c-468e-8ea4-854b019a066d e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lock "/var/lib/nova/instances/7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  9 10:57:36 compute-0 nova_compute[189493]: 2025-12-09 10:57:36.110 189497 DEBUG oslo_concurrency.processutils [None req-3a588a98-b06c-468e-8ea4-854b019a066d e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e23edb89d785ecc8dd3ccb4d60aa458ce75a798 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  9 10:57:36 compute-0 nova_compute[189493]: 2025-12-09 10:57:36.133 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/32dd7fb0-7003-48cc-b688-4b94946c911f/disk.eph0 --force-share --output=json" returned: 0 in 0.097s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  9 10:57:36 compute-0 nova_compute[189493]: 2025-12-09 10:57:36.135 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/32dd7fb0-7003-48cc-b688-4b94946c911f/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  9 10:57:36 compute-0 nova_compute[189493]: 2025-12-09 10:57:36.167 189497 DEBUG oslo_concurrency.processutils [None req-3a588a98-b06c-468e-8ea4-854b019a066d e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e23edb89d785ecc8dd3ccb4d60aa458ce75a798 --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  9 10:57:36 compute-0 nova_compute[189493]: 2025-12-09 10:57:36.169 189497 DEBUG oslo_concurrency.lockutils [None req-3a588a98-b06c-468e-8ea4-854b019a066d e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Acquiring lock "9e23edb89d785ecc8dd3ccb4d60aa458ce75a798" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  9 10:57:36 compute-0 nova_compute[189493]: 2025-12-09 10:57:36.170 189497 DEBUG oslo_concurrency.lockutils [None req-3a588a98-b06c-468e-8ea4-854b019a066d e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lock "9e23edb89d785ecc8dd3ccb4d60aa458ce75a798" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  9 10:57:36 compute-0 nova_compute[189493]: 2025-12-09 10:57:36.186 189497 DEBUG oslo_concurrency.processutils [None req-3a588a98-b06c-468e-8ea4-854b019a066d e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e23edb89d785ecc8dd3ccb4d60aa458ce75a798 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  9 10:57:36 compute-0 nova_compute[189493]: 2025-12-09 10:57:36.217 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/32dd7fb0-7003-48cc-b688-4b94946c911f/disk.eph0 --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  9 10:57:36 compute-0 nova_compute[189493]: 2025-12-09 10:57:36.227 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1bddf2bf-8932-4428-97d7-7342a7ec414b/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  9 10:57:36 compute-0 nova_compute[189493]: 2025-12-09 10:57:36.244 189497 DEBUG oslo_concurrency.processutils [None req-3a588a98-b06c-468e-8ea4-854b019a066d e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e23edb89d785ecc8dd3ccb4d60aa458ce75a798 --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  9 10:57:36 compute-0 nova_compute[189493]: 2025-12-09 10:57:36.246 189497 DEBUG oslo_concurrency.processutils [None req-3a588a98-b06c-468e-8ea4-854b019a066d e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/9e23edb89d785ecc8dd3ccb4d60aa458ce75a798,backing_fmt=raw /var/lib/nova/instances/7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  9 10:57:36 compute-0 nova_compute[189493]: 2025-12-09 10:57:36.284 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1bddf2bf-8932-4428-97d7-7342a7ec414b/disk --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  9 10:57:36 compute-0 nova_compute[189493]: 2025-12-09 10:57:36.286 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1bddf2bf-8932-4428-97d7-7342a7ec414b/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  9 10:57:36 compute-0 nova_compute[189493]: 2025-12-09 10:57:36.310 189497 DEBUG oslo_concurrency.processutils [None req-3a588a98-b06c-468e-8ea4-854b019a066d e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/9e23edb89d785ecc8dd3ccb4d60aa458ce75a798,backing_fmt=raw /var/lib/nova/instances/7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk 1073741824" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  9 10:57:36 compute-0 nova_compute[189493]: 2025-12-09 10:57:36.312 189497 DEBUG oslo_concurrency.lockutils [None req-3a588a98-b06c-468e-8ea4-854b019a066d e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lock "9e23edb89d785ecc8dd3ccb4d60aa458ce75a798" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.142s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  9 10:57:36 compute-0 nova_compute[189493]: 2025-12-09 10:57:36.312 189497 DEBUG oslo_concurrency.processutils [None req-3a588a98-b06c-468e-8ea4-854b019a066d e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e23edb89d785ecc8dd3ccb4d60aa458ce75a798 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  9 10:57:36 compute-0 nova_compute[189493]: 2025-12-09 10:57:36.382 189497 DEBUG oslo_concurrency.processutils [None req-3a588a98-b06c-468e-8ea4-854b019a066d e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/9e23edb89d785ecc8dd3ccb4d60aa458ce75a798 --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  9 10:57:36 compute-0 nova_compute[189493]: 2025-12-09 10:57:36.385 189497 DEBUG nova.virt.disk.api [None req-3a588a98-b06c-468e-8ea4-854b019a066d e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Checking if we can resize image /var/lib/nova/instances/7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166
Dec  9 10:57:36 compute-0 nova_compute[189493]: 2025-12-09 10:57:36.386 189497 DEBUG oslo_concurrency.processutils [None req-3a588a98-b06c-468e-8ea4-854b019a066d e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  9 10:57:36 compute-0 nova_compute[189493]: 2025-12-09 10:57:36.408 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1bddf2bf-8932-4428-97d7-7342a7ec414b/disk --force-share --output=json" returned: 0 in 0.122s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  9 10:57:36 compute-0 nova_compute[189493]: 2025-12-09 10:57:36.411 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  9 10:57:36 compute-0 nova_compute[189493]: 2025-12-09 10:57:36.456 189497 DEBUG oslo_concurrency.processutils [None req-3a588a98-b06c-468e-8ea4-854b019a066d e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  9 10:57:36 compute-0 nova_compute[189493]: 2025-12-09 10:57:36.459 189497 DEBUG nova.virt.disk.api [None req-3a588a98-b06c-468e-8ea4-854b019a066d e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Cannot resize image /var/lib/nova/instances/7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172
Dec  9 10:57:36 compute-0 nova_compute[189493]: 2025-12-09 10:57:36.460 189497 DEBUG nova.objects.instance [None req-3a588a98-b06c-468e-8ea4-854b019a066d e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lazy-loading 'migration_context' on Instance uuid 7b43ca09-ed65-4465-9fcc-95caa6dc9a88 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec  9 10:57:36 compute-0 nova_compute[189493]: 2025-12-09 10:57:36.486 189497 DEBUG oslo_concurrency.lockutils [None req-3a588a98-b06c-468e-8ea4-854b019a066d e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Acquiring lock "/var/lib/nova/instances/7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  9 10:57:36 compute-0 nova_compute[189493]: 2025-12-09 10:57:36.488 189497 DEBUG oslo_concurrency.lockutils [None req-3a588a98-b06c-468e-8ea4-854b019a066d e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lock "/var/lib/nova/instances/7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  9 10:57:36 compute-0 nova_compute[189493]: 2025-12-09 10:57:36.490 189497 DEBUG oslo_concurrency.lockutils [None req-3a588a98-b06c-468e-8ea4-854b019a066d e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lock "/var/lib/nova/instances/7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  9 10:57:36 compute-0 nova_compute[189493]: 2025-12-09 10:57:36.521 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.eph0 --force-share --output=json" returned: 0 in 0.110s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  9 10:57:36 compute-0 nova_compute[189493]: 2025-12-09 10:57:36.521 189497 DEBUG oslo_concurrency.processutils [None req-3a588a98-b06c-468e-8ea4-854b019a066d e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  9 10:57:36 compute-0 nova_compute[189493]: 2025-12-09 10:57:36.541 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  9 10:57:36 compute-0 nova_compute[189493]: 2025-12-09 10:57:36.589 189497 DEBUG oslo_concurrency.processutils [None req-3a588a98-b06c-468e-8ea4-854b019a066d e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  9 10:57:36 compute-0 nova_compute[189493]: 2025-12-09 10:57:36.591 189497 DEBUG oslo_concurrency.lockutils [None req-3a588a98-b06c-468e-8ea4-854b019a066d e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Acquiring lock "ephemeral_1_0706d66" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  9 10:57:36 compute-0 nova_compute[189493]: 2025-12-09 10:57:36.592 189497 DEBUG oslo_concurrency.lockutils [None req-3a588a98-b06c-468e-8ea4-854b019a066d e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lock "ephemeral_1_0706d66" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  9 10:57:36 compute-0 nova_compute[189493]: 2025-12-09 10:57:36.609 189497 DEBUG oslo_concurrency.processutils [None req-3a588a98-b06c-468e-8ea4-854b019a066d e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  9 10:57:36 compute-0 nova_compute[189493]: 2025-12-09 10:57:36.633 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.eph0 --force-share --output=json" returned: 0 in 0.092s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  9 10:57:36 compute-0 nova_compute[189493]: 2025-12-09 10:57:36.642 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  9 10:57:36 compute-0 nova_compute[189493]: 2025-12-09 10:57:36.685 189497 DEBUG oslo_concurrency.processutils [None req-3a588a98-b06c-468e-8ea4-854b019a066d e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  9 10:57:36 compute-0 nova_compute[189493]: 2025-12-09 10:57:36.688 189497 DEBUG oslo_concurrency.processutils [None req-3a588a98-b06c-468e-8ea4-854b019a066d e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.eph0 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  9 10:57:36 compute-0 nova_compute[189493]: 2025-12-09 10:57:36.724 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  9 10:57:36 compute-0 nova_compute[189493]: 2025-12-09 10:57:36.726 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  9 10:57:36 compute-0 nova_compute[189493]: 2025-12-09 10:57:36.760 189497 DEBUG oslo_concurrency.processutils [None req-3a588a98-b06c-468e-8ea4-854b019a066d e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.eph0 1073741824" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  9 10:57:36 compute-0 nova_compute[189493]: 2025-12-09 10:57:36.762 189497 DEBUG oslo_concurrency.lockutils [None req-3a588a98-b06c-468e-8ea4-854b019a066d e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lock "ephemeral_1_0706d66" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.170s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  9 10:57:36 compute-0 nova_compute[189493]: 2025-12-09 10:57:36.763 189497 DEBUG oslo_concurrency.processutils [None req-3a588a98-b06c-468e-8ea4-854b019a066d e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  9 10:57:36 compute-0 nova_compute[189493]: 2025-12-09 10:57:36.803 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  9 10:57:36 compute-0 nova_compute[189493]: 2025-12-09 10:57:36.804 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  9 10:57:36 compute-0 nova_compute[189493]: 2025-12-09 10:57:36.845 189497 DEBUG oslo_concurrency.processutils [None req-3a588a98-b06c-468e-8ea4-854b019a066d e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  9 10:57:36 compute-0 nova_compute[189493]: 2025-12-09 10:57:36.847 189497 DEBUG nova.virt.libvirt.driver [None req-3a588a98-b06c-468e-8ea4-854b019a066d e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 7b43ca09-ed65-4465-9fcc-95caa6dc9a88] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Dec  9 10:57:36 compute-0 nova_compute[189493]: 2025-12-09 10:57:36.847 189497 DEBUG nova.virt.libvirt.driver [None req-3a588a98-b06c-468e-8ea4-854b019a066d e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 7b43ca09-ed65-4465-9fcc-95caa6dc9a88] Ensure instance console log exists: /var/lib/nova/instances/7b43ca09-ed65-4465-9fcc-95caa6dc9a88/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Dec  9 10:57:36 compute-0 nova_compute[189493]: 2025-12-09 10:57:36.848 189497 DEBUG oslo_concurrency.lockutils [None req-3a588a98-b06c-468e-8ea4-854b019a066d e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  9 10:57:36 compute-0 nova_compute[189493]: 2025-12-09 10:57:36.849 189497 DEBUG oslo_concurrency.lockutils [None req-3a588a98-b06c-468e-8ea4-854b019a066d e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  9 10:57:36 compute-0 nova_compute[189493]: 2025-12-09 10:57:36.850 189497 DEBUG oslo_concurrency.lockutils [None req-3a588a98-b06c-468e-8ea4-854b019a066d e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
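The vgpu_resources acquire/release pair above is oslo.concurrency's standard named-lock pattern; the waited/held timings are emitted by lockutils itself. A minimal sketch, with an illustrative function name standing in for nova's _allocate_mdevs:

    from oslo_concurrency import lockutils

    @lockutils.synchronized('vgpu_resources')
    def allocate_mdevs():
        # Body runs with the named lock held; the decorator logs the
        # "acquired ... waited" / "released ... held" DEBUG lines seen above.
        pass

    allocate_mdevs()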
Dec  9 10:57:36 compute-0 nova_compute[189493]: 2025-12-09 10:57:36.863 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.eph0 --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  9 10:57:36 compute-0 nova_compute[189493]: 2025-12-09 10:57:36.864 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  9 10:57:36 compute-0 nova_compute[189493]: 2025-12-09 10:57:36.898 189497 DEBUG nova.network.neutron [None req-3a588a98-b06c-468e-8ea4-854b019a066d e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 7b43ca09-ed65-4465-9fcc-95caa6dc9a88] Successfully updated port: b903bb84-e176-4730-b223-613a9b01712b _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586#033[00m
Dec  9 10:57:36 compute-0 nova_compute[189493]: 2025-12-09 10:57:36.914 189497 DEBUG oslo_concurrency.lockutils [None req-3a588a98-b06c-468e-8ea4-854b019a066d e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Acquiring lock "refresh_cache-7b43ca09-ed65-4465-9fcc-95caa6dc9a88" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  9 10:57:36 compute-0 nova_compute[189493]: 2025-12-09 10:57:36.915 189497 DEBUG oslo_concurrency.lockutils [None req-3a588a98-b06c-468e-8ea4-854b019a066d e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Acquired lock "refresh_cache-7b43ca09-ed65-4465-9fcc-95caa6dc9a88" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  9 10:57:36 compute-0 nova_compute[189493]: 2025-12-09 10:57:36.916 189497 DEBUG nova.network.neutron [None req-3a588a98-b06c-468e-8ea4-854b019a066d e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 7b43ca09-ed65-4465-9fcc-95caa6dc9a88] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  9 10:57:36 compute-0 nova_compute[189493]: 2025-12-09 10:57:36.943 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.eph0 --force-share --output=json" returned: 0 in 0.079s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  9 10:57:36 compute-0 nova_compute[189493]: 2025-12-09 10:57:36.990 189497 DEBUG nova.compute.manager [req-391b6854-44ae-4c9d-9195-d6392c4418b7 req-29886022-1ad6-43ef-88d0-222c80e47024 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] [instance: 7b43ca09-ed65-4465-9fcc-95caa6dc9a88] Received event network-changed-b903bb84-e176-4730-b223-613a9b01712b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  9 10:57:36 compute-0 nova_compute[189493]: 2025-12-09 10:57:36.991 189497 DEBUG nova.compute.manager [req-391b6854-44ae-4c9d-9195-d6392c4418b7 req-29886022-1ad6-43ef-88d0-222c80e47024 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] [instance: 7b43ca09-ed65-4465-9fcc-95caa6dc9a88] Refreshing instance network info cache due to event network-changed-b903bb84-e176-4730-b223-613a9b01712b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  9 10:57:36 compute-0 nova_compute[189493]: 2025-12-09 10:57:36.992 189497 DEBUG oslo_concurrency.lockutils [req-391b6854-44ae-4c9d-9195-d6392c4418b7 req-29886022-1ad6-43ef-88d0-222c80e47024 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] Acquiring lock "refresh_cache-7b43ca09-ed65-4465-9fcc-95caa6dc9a88" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  9 10:57:37 compute-0 nova_compute[189493]: 2025-12-09 10:57:37.039 189497 DEBUG nova.network.neutron [None req-3a588a98-b06c-468e-8ea4-854b019a066d e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 7b43ca09-ed65-4465-9fcc-95caa6dc9a88] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  9 10:57:37 compute-0 nova_compute[189493]: 2025-12-09 10:57:37.306 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 10:57:37 compute-0 nova_compute[189493]: 2025-12-09 10:57:37.464 189497 WARNING nova.virt.libvirt.driver [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  9 10:57:37 compute-0 nova_compute[189493]: 2025-12-09 10:57:37.465 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4860MB free_disk=72.14183044433594GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
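The pci_devices field in the resource view above is plain JSON; a hedged helper for condensing it (the two-entry literal below is trimmed from the eleven-device list in the log):

    import collections
    import json

    pci_devices = json.loads('''[
      {"address": "0000:00:02.0", "vendor_id": "1af4", "product_id": "1050"},
      {"address": "0000:00:01.3", "vendor_id": "8086", "product_id": "7113"}
    ]''')
    # Count devices per vendor:product pair, e.g. to spot the virtio (1af4)
    # functions this nested-KVM host exposes.
    print(collections.Counter(
        (d["vendor_id"], d["product_id"]) for d in pci_devices))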
Dec  9 10:57:37 compute-0 nova_compute[189493]: 2025-12-09 10:57:37.466 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  9 10:57:37 compute-0 nova_compute[189493]: 2025-12-09 10:57:37.467 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  9 10:57:37 compute-0 nova_compute[189493]: 2025-12-09 10:57:37.547 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Instance 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  9 10:57:37 compute-0 nova_compute[189493]: 2025-12-09 10:57:37.548 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Instance 1bddf2bf-8932-4428-97d7-7342a7ec414b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  9 10:57:37 compute-0 nova_compute[189493]: 2025-12-09 10:57:37.548 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Instance 32dd7fb0-7003-48cc-b688-4b94946c911f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  9 10:57:37 compute-0 nova_compute[189493]: 2025-12-09 10:57:37.549 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Instance 7b43ca09-ed65-4465-9fcc-95caa6dc9a88 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  9 10:57:37 compute-0 nova_compute[189493]: 2025-12-09 10:57:37.550 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  9 10:57:37 compute-0 nova_compute[189493]: 2025-12-09 10:57:37.551 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=2560MB phys_disk=79GB used_disk=8GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  9 10:57:37 compute-0 nova_compute[189493]: 2025-12-09 10:57:37.658 189497 DEBUG nova.compute.provider_tree [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Inventory has not changed in ProviderTree for provider: cdc1168d-33c9-4d2c-8f23-1b695a68afd0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  9 10:57:37 compute-0 nova_compute[189493]: 2025-12-09 10:57:37.672 189497 DEBUG nova.scheduler.client.report [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Inventory has not changed for provider cdc1168d-33c9-4d2c-8f23-1b695a68afd0 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
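The inventory record above is what placement schedules against; usable capacity per resource class follows the standard formula capacity = (total - reserved) * allocation_ratio. A quick check with the logged numbers:

    # Totals, reservations, and ratios copied from the inventory line above.
    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 79,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        print(rc, (inv['total'] - inv['reserved']) * inv['allocation_ratio'])
    # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 70.2

With four instances placed (4 VCPU, 2 GB RAM, 8 GB disk per the Final resource view above), the host is nowhere near those schedulable limits.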
Dec  9 10:57:37 compute-0 nova_compute[189493]: 2025-12-09 10:57:37.698 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  9 10:57:37 compute-0 nova_compute[189493]: 2025-12-09 10:57:37.699 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.232s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  9 10:57:37 compute-0 nova_compute[189493]: 2025-12-09 10:57:37.782 189497 DEBUG nova.network.neutron [None req-3a588a98-b06c-468e-8ea4-854b019a066d e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 7b43ca09-ed65-4465-9fcc-95caa6dc9a88] Updating instance_info_cache with network_info: [{"id": "b903bb84-e176-4730-b223-613a9b01712b", "address": "fa:16:3e:91:d3:f4", "network": {"id": "c5af7354-5afe-400a-9e13-5500648117d8", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.92", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.176", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "736bbfddbeea47e3ac9d863ba120b8f2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb903bb84-e1", "ovs_interfaceid": "b903bb84-e176-4730-b223-613a9b01712b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  9 10:57:37 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:57:37.796 106644 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=9ec27861-bbe8-48fb-b30f-25b967e1609e, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '6'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
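The ovn_metadata_agent line above shows an ovsdbapp transaction bumping neutron:ovn-metadata-sb-cfg on the agent's Chassis_Private record. A hedged reconstruction with ovsdbapp's southbound API; the tcp endpoint is an assumption for illustration, the table, record UUID, and column value are copied from the log, and the agent additionally passes if_exists=True as shown in the logged DbSetCommand:

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.ovn_southbound import impl_idl

    # Assumed endpoint; a real deployment reads this from the agent config.
    idl = connection.OvsdbIdl.from_server('tcp:127.0.0.1:6642',
                                          'OVN_Southbound')
    sb = impl_idl.OvnSbApiIdlImpl(connection.Connection(idl, timeout=10))
    sb.db_set('Chassis_Private', '9ec27861-bbe8-48fb-b30f-25b967e1609e',
              ('external_ids', {'neutron:ovn-metadata-sb-cfg': '6'})
              ).execute(check_error=True)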
Dec  9 10:57:37 compute-0 nova_compute[189493]: 2025-12-09 10:57:37.805 189497 DEBUG oslo_concurrency.lockutils [None req-3a588a98-b06c-468e-8ea4-854b019a066d e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Releasing lock "refresh_cache-7b43ca09-ed65-4465-9fcc-95caa6dc9a88" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  9 10:57:37 compute-0 nova_compute[189493]: 2025-12-09 10:57:37.806 189497 DEBUG nova.compute.manager [None req-3a588a98-b06c-468e-8ea4-854b019a066d e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 7b43ca09-ed65-4465-9fcc-95caa6dc9a88] Instance network_info: |[{"id": "b903bb84-e176-4730-b223-613a9b01712b", "address": "fa:16:3e:91:d3:f4", "network": {"id": "c5af7354-5afe-400a-9e13-5500648117d8", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.92", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.176", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "736bbfddbeea47e3ac9d863ba120b8f2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb903bb84-e1", "ovs_interfaceid": "b903bb84-e176-4730-b223-613a9b01712b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967#033[00m
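The network_info structure logged above is deeply nested but regular, and pulling the addresses out is plain dict-walking. A sketch over a literal trimmed from the logged VIF (key names are exactly those in the log):

    vif = {
        "id": "b903bb84-e176-4730-b223-613a9b01712b",
        "network": {"subnets": [{
            "ips": [{
                "address": "192.168.0.92", "type": "fixed",
                "floating_ips": [{"address": "192.168.122.176"}],
            }],
        }]},
    }
    for subnet in vif["network"]["subnets"]:
        for ip in subnet["ips"]:
            print(ip["type"], ip["address"],
                  [f["address"] for f in ip.get("floating_ips", [])])
    # fixed 192.168.0.92 ['192.168.122.176']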
Dec  9 10:57:37 compute-0 nova_compute[189493]: 2025-12-09 10:57:37.808 189497 DEBUG oslo_concurrency.lockutils [req-391b6854-44ae-4c9d-9195-d6392c4418b7 req-29886022-1ad6-43ef-88d0-222c80e47024 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] Acquired lock "refresh_cache-7b43ca09-ed65-4465-9fcc-95caa6dc9a88" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  9 10:57:37 compute-0 nova_compute[189493]: 2025-12-09 10:57:37.809 189497 DEBUG nova.network.neutron [req-391b6854-44ae-4c9d-9195-d6392c4418b7 req-29886022-1ad6-43ef-88d0-222c80e47024 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] [instance: 7b43ca09-ed65-4465-9fcc-95caa6dc9a88] Refreshing network info cache for port b903bb84-e176-4730-b223-613a9b01712b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  9 10:57:37 compute-0 nova_compute[189493]: 2025-12-09 10:57:37.816 189497 DEBUG nova.virt.libvirt.driver [None req-3a588a98-b06c-468e-8ea4-854b019a066d e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 7b43ca09-ed65-4465-9fcc-95caa6dc9a88] Start _get_guest_xml network_info=[{"id": "b903bb84-e176-4730-b223-613a9b01712b", "address": "fa:16:3e:91:d3:f4", "network": {"id": "c5af7354-5afe-400a-9e13-5500648117d8", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.92", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.176", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "736bbfddbeea47e3ac9d863ba120b8f2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb903bb84-e1", "ovs_interfaceid": "b903bb84-e176-4730-b223-613a9b01712b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.eph0': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2025-12-09T10:47:15Z,direct_url=<?>,disk_format='qcow2',id=53d12211-5d5c-4333-b3ee-e3dcf1663767,min_disk=0,min_ram=0,name='cirros',owner='736bbfddbeea47e3ac9d863ba120b8f2',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2025-12-09T10:47:17Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'encryption_options': None, 'encryption_format': None, 'disk_bus': 'virtio', 'boot_index': 0, 'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'size': 0, 'image_id': '53d12211-5d5c-4333-b3ee-e3dcf1663767'}], 'ephemerals': [{'encrypted': False, 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'device_name': '/dev/vdb', 'device_type': 'disk', 'guest_format': None, 'size': 1}], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  9 10:57:37 compute-0 nova_compute[189493]: 2025-12-09 10:57:37.830 189497 WARNING nova.virt.libvirt.driver [None req-3a588a98-b06c-468e-8ea4-854b019a066d e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  9 10:57:37 compute-0 nova_compute[189493]: 2025-12-09 10:57:37.851 189497 DEBUG nova.virt.libvirt.host [None req-3a588a98-b06c-468e-8ea4-854b019a066d e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  9 10:57:37 compute-0 nova_compute[189493]: 2025-12-09 10:57:37.855 189497 DEBUG nova.virt.libvirt.host [None req-3a588a98-b06c-468e-8ea4-854b019a066d e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  9 10:57:37 compute-0 nova_compute[189493]: 2025-12-09 10:57:37.862 189497 DEBUG nova.virt.libvirt.host [None req-3a588a98-b06c-468e-8ea4-854b019a066d e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  9 10:57:37 compute-0 nova_compute[189493]: 2025-12-09 10:57:37.862 189497 DEBUG nova.virt.libvirt.host [None req-3a588a98-b06c-468e-8ea4-854b019a066d e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
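The two probes above first look for a cgroup-v1 cpu controller (absent here) and then find one under cgroup v2. A simplified check, not necessarily nova's exact code: on a v2 host the enabled controllers are listed in a single file.

    # On this host the file contains 'cpu', matching "CPU controller found
    # on host" in the line above.
    with open('/sys/fs/cgroup/cgroup.controllers') as f:
        print('cpu' in f.read().split())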
Dec  9 10:57:37 compute-0 nova_compute[189493]: 2025-12-09 10:57:37.863 189497 DEBUG nova.virt.libvirt.driver [None req-3a588a98-b06c-468e-8ea4-854b019a066d e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  9 10:57:37 compute-0 nova_compute[189493]: 2025-12-09 10:57:37.864 189497 DEBUG nova.virt.hardware [None req-3a588a98-b06c-468e-8ea4-854b019a066d e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-09T10:47:21Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=1,extra_specs={},flavorid='cf91b364-8467-4d1e-8c92-f7d1fab99905',id=1,is_public=True,memory_mb=512,name='m1.small',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2025-12-09T10:47:15Z,direct_url=<?>,disk_format='qcow2',id=53d12211-5d5c-4333-b3ee-e3dcf1663767,min_disk=0,min_ram=0,name='cirros',owner='736bbfddbeea47e3ac9d863ba120b8f2',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2025-12-09T10:47:17Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  9 10:57:37 compute-0 nova_compute[189493]: 2025-12-09 10:57:37.864 189497 DEBUG nova.virt.hardware [None req-3a588a98-b06c-468e-8ea4-854b019a066d e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  9 10:57:37 compute-0 nova_compute[189493]: 2025-12-09 10:57:37.865 189497 DEBUG nova.virt.hardware [None req-3a588a98-b06c-468e-8ea4-854b019a066d e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  9 10:57:37 compute-0 nova_compute[189493]: 2025-12-09 10:57:37.865 189497 DEBUG nova.virt.hardware [None req-3a588a98-b06c-468e-8ea4-854b019a066d e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  9 10:57:37 compute-0 nova_compute[189493]: 2025-12-09 10:57:37.868 189497 DEBUG nova.virt.hardware [None req-3a588a98-b06c-468e-8ea4-854b019a066d e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  9 10:57:37 compute-0 nova_compute[189493]: 2025-12-09 10:57:37.869 189497 DEBUG nova.virt.hardware [None req-3a588a98-b06c-468e-8ea4-854b019a066d e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  9 10:57:37 compute-0 nova_compute[189493]: 2025-12-09 10:57:37.870 189497 DEBUG nova.virt.hardware [None req-3a588a98-b06c-468e-8ea4-854b019a066d e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  9 10:57:37 compute-0 nova_compute[189493]: 2025-12-09 10:57:37.872 189497 DEBUG nova.virt.hardware [None req-3a588a98-b06c-468e-8ea4-854b019a066d e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  9 10:57:37 compute-0 nova_compute[189493]: 2025-12-09 10:57:37.873 189497 DEBUG nova.virt.hardware [None req-3a588a98-b06c-468e-8ea4-854b019a066d e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  9 10:57:37 compute-0 nova_compute[189493]: 2025-12-09 10:57:37.874 189497 DEBUG nova.virt.hardware [None req-3a588a98-b06c-468e-8ea4-854b019a066d e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  9 10:57:37 compute-0 nova_compute[189493]: 2025-12-09 10:57:37.875 189497 DEBUG nova.virt.hardware [None req-3a588a98-b06c-468e-8ea4-854b019a066d e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
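The topology lines above enumerate every sockets/cores/threads factorisation of the flavor's vCPU count under the (here unconstrained) 65536 maxima; for one vCPU only 1x1x1 survives. A simplified reconstruction of that enumeration (nova's real loop lives in nova.virt.hardware and orders candidates differently):

    import itertools

    def possible_topologies(vcpus, maxima=(65536, 65536, 65536)):
        # Yield each (sockets, cores, threads) whose product equals the
        # vCPU count and which fits under the per-dimension maxima.
        for s, c, t in itertools.product(range(1, vcpus + 1), repeat=3):
            if s * c * t == vcpus and all(
                    v <= m for v, m in zip((s, c, t), maxima)):
                yield (s, c, t)

    print(list(possible_topologies(1)))  # [(1, 1, 1)], as logged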
Dec  9 10:57:37 compute-0 nova_compute[189493]: 2025-12-09 10:57:37.883 189497 DEBUG nova.virt.libvirt.vif [None req-3a588a98-b06c-468e-8ea4-854b019a066d e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-09T10:57:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='vn-afn7y6w-4mhk6z2gnzo4-cnlzzwhsflo5-vnf-4ifywm3gsfrq',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-afn7y6w-4mhk6z2gnzo4-cnlzzwhsflo5-vnf-4ifywm3gsfrq',id=4,image_ref='53d12211-5d5c-4333-b3ee-e3dcf1663767',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='24f6e5b2-dd43-46f1-87a4-e2efc1300914'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='736bbfddbeea47e3ac9d863ba120b8f2',ramdisk_id='',reservation_id='r-d2fjtx7u',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='admin,reader,member',image_base_image_ref='53d12211-5d5c-4333-b3ee-e3dcf1663767',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-09T10:57:35Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT0xMDQwMTc0NjYzMDI1NzY4MzYyPT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTEwNDAxNzQ2NjMwMjU3NjgzNjI9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09MTA0MDE3NDY2MzAyNTc2ODM2Mj09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBo
YXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTEwNDAxNzQ2NjMwMjU3NjgzNjI9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT0xMDQwMTc0NjYzMDI1NzY4MzYyPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT0xMDQwMTc0NjYzMDI1NzY4MzYyPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5
kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvKCclc1xuJywgJyAnLmpvaW4oYXJncykpICAjI
Dec  9 10:57:37 compute-0 nova_compute[189493]: ywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09MTA0MDE3NDY2MzAyNTc2ODM2Mj09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTEwNDAxNzQ2NjMwMjU3NjgzNjI9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT0xMDQwMTc0NjYzMDI1NzY4MzYyPT0tLQo=',user_id='e6d3a937c2a74eb0816d9f63820935e0',uuid=7b43ca09-ed65-4465-9fcc-95caa6dc9a88,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "b903bb84-e176-4730-b223-613a9b01712b", "address": "fa:16:3e:91:d3:f4", "network": {"id": "c5af7354-5afe-400a-9e13-5500648117d8", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.92", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.176", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "736bbfddbeea47e3ac9d863ba120b8f2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": 
"ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb903bb84-e1", "ovs_interfaceid": "b903bb84-e176-4730-b223-613a9b01712b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563#033[00m
Dec  9 10:57:37 compute-0 nova_compute[189493]: 2025-12-09 10:57:37.884 189497 DEBUG nova.network.os_vif_util [None req-3a588a98-b06c-468e-8ea4-854b019a066d e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Converting VIF {"id": "b903bb84-e176-4730-b223-613a9b01712b", "address": "fa:16:3e:91:d3:f4", "network": {"id": "c5af7354-5afe-400a-9e13-5500648117d8", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.92", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.176", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "736bbfddbeea47e3ac9d863ba120b8f2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb903bb84-e1", "ovs_interfaceid": "b903bb84-e176-4730-b223-613a9b01712b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  9 10:57:37 compute-0 nova_compute[189493]: 2025-12-09 10:57:37.885 189497 DEBUG nova.network.os_vif_util [None req-3a588a98-b06c-468e-8ea4-854b019a066d e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:91:d3:f4,bridge_name='br-int',has_traffic_filtering=True,id=b903bb84-e176-4730-b223-613a9b01712b,network=Network(c5af7354-5afe-400a-9e13-5500648117d8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapb903bb84-e1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  9 10:57:37 compute-0 nova_compute[189493]: 2025-12-09 10:57:37.887 189497 DEBUG nova.objects.instance [None req-3a588a98-b06c-468e-8ea4-854b019a066d e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lazy-loading 'pci_devices' on Instance uuid 7b43ca09-ed65-4465-9fcc-95caa6dc9a88 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  9 10:57:37 compute-0 nova_compute[189493]: 2025-12-09 10:57:37.900 189497 DEBUG nova.virt.libvirt.driver [None req-3a588a98-b06c-468e-8ea4-854b019a066d e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 7b43ca09-ed65-4465-9fcc-95caa6dc9a88] End _get_guest_xml xml=<domain type="kvm">
Dec  9 10:57:37 compute-0 nova_compute[189493]:  <uuid>7b43ca09-ed65-4465-9fcc-95caa6dc9a88</uuid>
Dec  9 10:57:37 compute-0 nova_compute[189493]:  <name>instance-00000004</name>
Dec  9 10:57:37 compute-0 nova_compute[189493]:  <memory>524288</memory>
Dec  9 10:57:37 compute-0 nova_compute[189493]:  <vcpu>1</vcpu>
Dec  9 10:57:37 compute-0 nova_compute[189493]:  <metadata>
Dec  9 10:57:37 compute-0 nova_compute[189493]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  9 10:57:37 compute-0 nova_compute[189493]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  9 10:57:37 compute-0 nova_compute[189493]:      <nova:name>vn-afn7y6w-4mhk6z2gnzo4-cnlzzwhsflo5-vnf-4ifywm3gsfrq</nova:name>
Dec  9 10:57:37 compute-0 nova_compute[189493]:      <nova:creationTime>2025-12-09 10:57:37</nova:creationTime>
Dec  9 10:57:37 compute-0 nova_compute[189493]:      <nova:flavor name="m1.small">
Dec  9 10:57:37 compute-0 nova_compute[189493]:        <nova:memory>512</nova:memory>
Dec  9 10:57:37 compute-0 nova_compute[189493]:        <nova:disk>1</nova:disk>
Dec  9 10:57:37 compute-0 nova_compute[189493]:        <nova:swap>0</nova:swap>
Dec  9 10:57:37 compute-0 nova_compute[189493]:        <nova:ephemeral>1</nova:ephemeral>
Dec  9 10:57:37 compute-0 nova_compute[189493]:        <nova:vcpus>1</nova:vcpus>
Dec  9 10:57:37 compute-0 nova_compute[189493]:      </nova:flavor>
Dec  9 10:57:37 compute-0 nova_compute[189493]:      <nova:owner>
Dec  9 10:57:37 compute-0 nova_compute[189493]:        <nova:user uuid="e6d3a937c2a74eb0816d9f63820935e0">admin</nova:user>
Dec  9 10:57:37 compute-0 nova_compute[189493]:        <nova:project uuid="736bbfddbeea47e3ac9d863ba120b8f2">admin</nova:project>
Dec  9 10:57:37 compute-0 nova_compute[189493]:      </nova:owner>
Dec  9 10:57:37 compute-0 nova_compute[189493]:      <nova:root type="image" uuid="53d12211-5d5c-4333-b3ee-e3dcf1663767"/>
Dec  9 10:57:37 compute-0 nova_compute[189493]:      <nova:ports>
Dec  9 10:57:37 compute-0 nova_compute[189493]:        <nova:port uuid="b903bb84-e176-4730-b223-613a9b01712b">
Dec  9 10:57:37 compute-0 nova_compute[189493]:          <nova:ip type="fixed" address="192.168.0.92" ipVersion="4"/>
Dec  9 10:57:37 compute-0 nova_compute[189493]:        </nova:port>
Dec  9 10:57:37 compute-0 nova_compute[189493]:      </nova:ports>
Dec  9 10:57:37 compute-0 nova_compute[189493]:    </nova:instance>
Dec  9 10:57:37 compute-0 nova_compute[189493]:  </metadata>
Dec  9 10:57:37 compute-0 nova_compute[189493]:  <sysinfo type="smbios">
Dec  9 10:57:37 compute-0 nova_compute[189493]:    <system>
Dec  9 10:57:37 compute-0 nova_compute[189493]:      <entry name="manufacturer">RDO</entry>
Dec  9 10:57:37 compute-0 nova_compute[189493]:      <entry name="product">OpenStack Compute</entry>
Dec  9 10:57:37 compute-0 nova_compute[189493]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  9 10:57:37 compute-0 nova_compute[189493]:      <entry name="serial">7b43ca09-ed65-4465-9fcc-95caa6dc9a88</entry>
Dec  9 10:57:37 compute-0 nova_compute[189493]:      <entry name="uuid">7b43ca09-ed65-4465-9fcc-95caa6dc9a88</entry>
Dec  9 10:57:37 compute-0 nova_compute[189493]:      <entry name="family">Virtual Machine</entry>
Dec  9 10:57:37 compute-0 nova_compute[189493]:    </system>
Dec  9 10:57:37 compute-0 nova_compute[189493]:  </sysinfo>
Dec  9 10:57:37 compute-0 nova_compute[189493]:  <os>
Dec  9 10:57:37 compute-0 nova_compute[189493]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  9 10:57:37 compute-0 nova_compute[189493]:    <boot dev="hd"/>
Dec  9 10:57:37 compute-0 nova_compute[189493]:    <smbios mode="sysinfo"/>
Dec  9 10:57:37 compute-0 nova_compute[189493]:  </os>
Dec  9 10:57:37 compute-0 nova_compute[189493]:  <features>
Dec  9 10:57:37 compute-0 nova_compute[189493]:    <acpi/>
Dec  9 10:57:37 compute-0 nova_compute[189493]:    <apic/>
Dec  9 10:57:37 compute-0 nova_compute[189493]:    <vmcoreinfo/>
Dec  9 10:57:37 compute-0 nova_compute[189493]:  </features>
Dec  9 10:57:37 compute-0 nova_compute[189493]:  <clock offset="utc">
Dec  9 10:57:37 compute-0 nova_compute[189493]:    <timer name="pit" tickpolicy="delay"/>
Dec  9 10:57:37 compute-0 nova_compute[189493]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  9 10:57:37 compute-0 nova_compute[189493]:    <timer name="hpet" present="no"/>
Dec  9 10:57:37 compute-0 nova_compute[189493]:  </clock>
Dec  9 10:57:37 compute-0 nova_compute[189493]:  <cpu mode="host-model" match="exact">
Dec  9 10:57:37 compute-0 nova_compute[189493]:    <topology sockets="1" cores="1" threads="1"/>
Dec  9 10:57:37 compute-0 nova_compute[189493]:  </cpu>
Dec  9 10:57:37 compute-0 nova_compute[189493]:  <devices>
Dec  9 10:57:37 compute-0 nova_compute[189493]:    <disk type="file" device="disk">
Dec  9 10:57:37 compute-0 nova_compute[189493]:      <driver name="qemu" type="qcow2" cache="none"/>
Dec  9 10:57:37 compute-0 nova_compute[189493]:      <source file="/var/lib/nova/instances/7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk"/>
Dec  9 10:57:37 compute-0 nova_compute[189493]:      <target dev="vda" bus="virtio"/>
Dec  9 10:57:37 compute-0 nova_compute[189493]:    </disk>
Dec  9 10:57:37 compute-0 nova_compute[189493]:    <disk type="file" device="disk">
Dec  9 10:57:37 compute-0 nova_compute[189493]:      <driver name="qemu" type="qcow2" cache="none"/>
Dec  9 10:57:37 compute-0 nova_compute[189493]:      <source file="/var/lib/nova/instances/7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.eph0"/>
Dec  9 10:57:37 compute-0 nova_compute[189493]:      <target dev="vdb" bus="virtio"/>
Dec  9 10:57:37 compute-0 nova_compute[189493]:    </disk>
Dec  9 10:57:37 compute-0 nova_compute[189493]:    <disk type="file" device="cdrom">
Dec  9 10:57:37 compute-0 nova_compute[189493]:      <driver name="qemu" type="raw" cache="none"/>
Dec  9 10:57:37 compute-0 nova_compute[189493]:      <source file="/var/lib/nova/instances/7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.config"/>
Dec  9 10:57:37 compute-0 nova_compute[189493]:      <target dev="sda" bus="sata"/>
Dec  9 10:57:37 compute-0 nova_compute[189493]:    </disk>
Dec  9 10:57:37 compute-0 nova_compute[189493]:    <interface type="ethernet">
Dec  9 10:57:37 compute-0 nova_compute[189493]:      <mac address="fa:16:3e:91:d3:f4"/>
Dec  9 10:57:37 compute-0 nova_compute[189493]:      <model type="virtio"/>
Dec  9 10:57:37 compute-0 nova_compute[189493]:      <driver name="vhost" rx_queue_size="512"/>
Dec  9 10:57:37 compute-0 nova_compute[189493]:      <mtu size="1442"/>
Dec  9 10:57:37 compute-0 nova_compute[189493]:      <target dev="tapb903bb84-e1"/>
Dec  9 10:57:37 compute-0 nova_compute[189493]:    </interface>
Dec  9 10:57:37 compute-0 nova_compute[189493]:    <serial type="pty">
Dec  9 10:57:37 compute-0 nova_compute[189493]:      <log file="/var/lib/nova/instances/7b43ca09-ed65-4465-9fcc-95caa6dc9a88/console.log" append="off"/>
Dec  9 10:57:37 compute-0 nova_compute[189493]:    </serial>
Dec  9 10:57:37 compute-0 nova_compute[189493]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  9 10:57:37 compute-0 nova_compute[189493]:    <video>
Dec  9 10:57:37 compute-0 nova_compute[189493]:      <model type="virtio"/>
Dec  9 10:57:37 compute-0 nova_compute[189493]:    </video>
Dec  9 10:57:37 compute-0 nova_compute[189493]:    <input type="tablet" bus="usb"/>
Dec  9 10:57:37 compute-0 nova_compute[189493]:    <rng model="virtio">
Dec  9 10:57:37 compute-0 nova_compute[189493]:      <backend model="random">/dev/urandom</backend>
Dec  9 10:57:37 compute-0 nova_compute[189493]:    </rng>
Dec  9 10:57:37 compute-0 nova_compute[189493]:    <controller type="pci" model="pcie-root"/>
Dec  9 10:57:37 compute-0 nova_compute[189493]:    <controller type="pci" model="pcie-root-port"/>
Dec  9 10:57:37 compute-0 nova_compute[189493]:    <controller type="pci" model="pcie-root-port"/>
Dec  9 10:57:37 compute-0 nova_compute[189493]:    <controller type="pci" model="pcie-root-port"/>
Dec  9 10:57:37 compute-0 nova_compute[189493]:    <controller type="pci" model="pcie-root-port"/>
Dec  9 10:57:37 compute-0 nova_compute[189493]:    <controller type="pci" model="pcie-root-port"/>
Dec  9 10:57:37 compute-0 nova_compute[189493]:    <controller type="pci" model="pcie-root-port"/>
Dec  9 10:57:37 compute-0 nova_compute[189493]:    <controller type="pci" model="pcie-root-port"/>
Dec  9 10:57:37 compute-0 nova_compute[189493]:    <controller type="pci" model="pcie-root-port"/>
Dec  9 10:57:37 compute-0 nova_compute[189493]:    <controller type="pci" model="pcie-root-port"/>
Dec  9 10:57:37 compute-0 nova_compute[189493]:    <controller type="pci" model="pcie-root-port"/>
Dec  9 10:57:37 compute-0 nova_compute[189493]:    <controller type="pci" model="pcie-root-port"/>
Dec  9 10:57:37 compute-0 nova_compute[189493]:    <controller type="pci" model="pcie-root-port"/>
Dec  9 10:57:37 compute-0 nova_compute[189493]:    <controller type="pci" model="pcie-root-port"/>
Dec  9 10:57:37 compute-0 nova_compute[189493]:    <controller type="pci" model="pcie-root-port"/>
Dec  9 10:57:37 compute-0 nova_compute[189493]:    <controller type="pci" model="pcie-root-port"/>
Dec  9 10:57:37 compute-0 nova_compute[189493]:    <controller type="pci" model="pcie-root-port"/>
Dec  9 10:57:37 compute-0 nova_compute[189493]:    <controller type="pci" model="pcie-root-port"/>
Dec  9 10:57:37 compute-0 nova_compute[189493]:    <controller type="pci" model="pcie-root-port"/>
Dec  9 10:57:37 compute-0 nova_compute[189493]:    <controller type="pci" model="pcie-root-port"/>
Dec  9 10:57:37 compute-0 nova_compute[189493]:    <controller type="pci" model="pcie-root-port"/>
Dec  9 10:57:37 compute-0 nova_compute[189493]:    <controller type="pci" model="pcie-root-port"/>
Dec  9 10:57:37 compute-0 nova_compute[189493]:    <controller type="pci" model="pcie-root-port"/>
Dec  9 10:57:37 compute-0 nova_compute[189493]:    <controller type="pci" model="pcie-root-port"/>
Dec  9 10:57:37 compute-0 nova_compute[189493]:    <controller type="pci" model="pcie-root-port"/>
Dec  9 10:57:37 compute-0 nova_compute[189493]:    <controller type="usb" index="0"/>
Dec  9 10:57:37 compute-0 nova_compute[189493]:    <memballoon model="virtio">
Dec  9 10:57:37 compute-0 nova_compute[189493]:      <stats period="10"/>
Dec  9 10:57:37 compute-0 nova_compute[189493]:    </memballoon>
Dec  9 10:57:37 compute-0 nova_compute[189493]:  </devices>
Dec  9 10:57:37 compute-0 nova_compute[189493]: </domain>
Dec  9 10:57:37 compute-0 nova_compute[189493]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
Dec  9 10:57:37 compute-0 nova_compute[189493]: 2025-12-09 10:57:37.914 189497 DEBUG nova.compute.manager [None req-3a588a98-b06c-468e-8ea4-854b019a066d e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 7b43ca09-ed65-4465-9fcc-95caa6dc9a88] Preparing to wait for external event network-vif-plugged-b903bb84-e176-4730-b223-613a9b01712b prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283#033[00m
Dec  9 10:57:37 compute-0 nova_compute[189493]: 2025-12-09 10:57:37.915 189497 DEBUG oslo_concurrency.lockutils [None req-3a588a98-b06c-468e-8ea4-854b019a066d e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Acquiring lock "7b43ca09-ed65-4465-9fcc-95caa6dc9a88-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  9 10:57:37 compute-0 nova_compute[189493]: 2025-12-09 10:57:37.915 189497 DEBUG oslo_concurrency.lockutils [None req-3a588a98-b06c-468e-8ea4-854b019a066d e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lock "7b43ca09-ed65-4465-9fcc-95caa6dc9a88-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  9 10:57:37 compute-0 nova_compute[189493]: 2025-12-09 10:57:37.915 189497 DEBUG oslo_concurrency.lockutils [None req-3a588a98-b06c-468e-8ea4-854b019a066d e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lock "7b43ca09-ed65-4465-9fcc-95caa6dc9a88-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  9 10:57:37 compute-0 nova_compute[189493]: 2025-12-09 10:57:37.916 189497 DEBUG nova.virt.libvirt.vif [None req-3a588a98-b06c-468e-8ea4-854b019a066d e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2025-12-09T10:57:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='vn-afn7y6w-4mhk6z2gnzo4-cnlzzwhsflo5-vnf-4ifywm3gsfrq',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-afn7y6w-4mhk6z2gnzo4-cnlzzwhsflo5-vnf-4ifywm3gsfrq',id=4,image_ref='53d12211-5d5c-4333-b3ee-e3dcf1663767',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='24f6e5b2-dd43-46f1-87a4-e2efc1300914'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='736bbfddbeea47e3ac9d863ba120b8f2',ramdisk_id='',reservation_id='r-d2fjtx7u',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='admin,reader,member',image_base_image_ref='53d12211-5d5c-4333-b3ee-e3dcf1663767',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2025-12-09T10:57:35Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT0xMDQwMTc0NjYzMDI1NzY4MzYyPT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTEwNDAxNzQ2NjMwMjU3NjgzNjI9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09MTA0MDE3NDY2MzAyNTc2ODM2Mj09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTEwNDAxNzQ2NjMwMjU3NjgzNjI9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT0xMDQwMTc0NjYzMDI1NzY4MzYyPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT0xMDQwMTc0NjYzMDI1NzY4MzYyPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvKCclc1xuJywgJyAnLmpvaW4oYXJ
Dec  9 10:57:37 compute-0 nova_compute[189493]: 2025-12-09 10:57:37.916 189497 DEBUG nova.network.os_vif_util [None req-3a588a98-b06c-468e-8ea4-854b019a066d e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Converting VIF {"id": "b903bb84-e176-4730-b223-613a9b01712b", "address": "fa:16:3e:91:d3:f4", "network": {"id": "c5af7354-5afe-400a-9e13-5500648117d8", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.92", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.176", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "736bbfddbeea47e3ac9d863ba120b8f2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb903bb84-e1", "ovs_interfaceid": "b903bb84-e176-4730-b223-613a9b01712b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  9 10:57:37 compute-0 nova_compute[189493]: 2025-12-09 10:57:37.917 189497 DEBUG nova.network.os_vif_util [None req-3a588a98-b06c-468e-8ea4-854b019a066d e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:91:d3:f4,bridge_name='br-int',has_traffic_filtering=True,id=b903bb84-e176-4730-b223-613a9b01712b,network=Network(c5af7354-5afe-400a-9e13-5500648117d8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapb903bb84-e1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  9 10:57:37 compute-0 nova_compute[189493]: 2025-12-09 10:57:37.918 189497 DEBUG os_vif [None req-3a588a98-b06c-468e-8ea4-854b019a066d e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:91:d3:f4,bridge_name='br-int',has_traffic_filtering=True,id=b903bb84-e176-4730-b223-613a9b01712b,network=Network(c5af7354-5afe-400a-9e13-5500648117d8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapb903bb84-e1') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76#033[00m
Dec  9 10:57:37 compute-0 nova_compute[189493]: 2025-12-09 10:57:37.919 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 10:57:37 compute-0 nova_compute[189493]: 2025-12-09 10:57:37.919 189497 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  9 10:57:37 compute-0 nova_compute[189493]: 2025-12-09 10:57:37.920 189497 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  9 10:57:37 compute-0 nova_compute[189493]: 2025-12-09 10:57:37.930 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 10:57:37 compute-0 nova_compute[189493]: 2025-12-09 10:57:37.931 189497 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb903bb84-e1, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  9 10:57:37 compute-0 nova_compute[189493]: 2025-12-09 10:57:37.932 189497 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapb903bb84-e1, col_values=(('external_ids', {'iface-id': 'b903bb84-e176-4730-b223-613a9b01712b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:91:d3:f4', 'vm-uuid': '7b43ca09-ed65-4465-9fcc-95caa6dc9a88'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  9 10:57:37 compute-0 NetworkManager[56302]: <info>  [1765277857.9369] manager: (tapb903bb84-e1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/31)
Dec  9 10:57:37 compute-0 nova_compute[189493]: 2025-12-09 10:57:37.937 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 10:57:37 compute-0 rsyslogd[236818]: message too long (8192) with configured size 8096, begin of message is: 2025-12-09 10:57:37.883 189497 DEBUG nova.virt.libvirt.vif [None req-3a588a98-b0 [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Dec  9 10:57:37 compute-0 nova_compute[189493]: 2025-12-09 10:57:37.941 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  9 10:57:37 compute-0 nova_compute[189493]: 2025-12-09 10:57:37.948 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 10:57:37 compute-0 nova_compute[189493]: 2025-12-09 10:57:37.950 189497 INFO os_vif [None req-3a588a98-b06c-468e-8ea4-854b019a066d e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:91:d3:f4,bridge_name='br-int',has_traffic_filtering=True,id=b903bb84-e176-4730-b223-613a9b01712b,network=Network(c5af7354-5afe-400a-9e13-5500648117d8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapb903bb84-e1')#033[00m
Dec  9 10:57:37 compute-0 podman[243740]: 2025-12-09 10:57:37.986476737 +0000 UTC m=+0.125543359 container health_status 0391d8911d61abd7376f1f93f329cadfe8d3add845c9e6f46fc2c3dfbcc4f02a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  9 10:57:38 compute-0 nova_compute[189493]: 2025-12-09 10:57:38.007 189497 DEBUG nova.virt.libvirt.driver [None req-3a588a98-b06c-468e-8ea4-854b019a066d e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  9 10:57:38 compute-0 nova_compute[189493]: 2025-12-09 10:57:38.007 189497 DEBUG nova.virt.libvirt.driver [None req-3a588a98-b06c-468e-8ea4-854b019a066d e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  9 10:57:38 compute-0 nova_compute[189493]: 2025-12-09 10:57:38.007 189497 DEBUG nova.virt.libvirt.driver [None req-3a588a98-b06c-468e-8ea4-854b019a066d e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  9 10:57:38 compute-0 nova_compute[189493]: 2025-12-09 10:57:38.008 189497 DEBUG nova.virt.libvirt.driver [None req-3a588a98-b06c-468e-8ea4-854b019a066d e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] No VIF found with MAC fa:16:3e:91:d3:f4, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092#033[00m
Dec  9 10:57:38 compute-0 nova_compute[189493]: 2025-12-09 10:57:38.008 189497 INFO nova.virt.libvirt.driver [None req-3a588a98-b06c-468e-8ea4-854b019a066d e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 7b43ca09-ed65-4465-9fcc-95caa6dc9a88] Using config drive#033[00m
Dec  9 10:57:38 compute-0 nova_compute[189493]: 2025-12-09 10:57:38.912 189497 INFO nova.virt.libvirt.driver [None req-3a588a98-b06c-468e-8ea4-854b019a066d e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 7b43ca09-ed65-4465-9fcc-95caa6dc9a88] Creating config drive at /var/lib/nova/instances/7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.config#033[00m
Dec  9 10:57:38 compute-0 nova_compute[189493]: 2025-12-09 10:57:38.920 189497 DEBUG oslo_concurrency.processutils [None req-3a588a98-b06c-468e-8ea4-854b019a066d e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpaadafgmu execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  9 10:57:39 compute-0 nova_compute[189493]: 2025-12-09 10:57:39.068 189497 DEBUG oslo_concurrency.processutils [None req-3a588a98-b06c-468e-8ea4-854b019a066d e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpaadafgmu" returned: 0 in 0.148s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  9 10:57:39 compute-0 kernel: tapb903bb84-e1: entered promiscuous mode
Dec  9 10:57:39 compute-0 NetworkManager[56302]: <info>  [1765277859.2110] manager: (tapb903bb84-e1): new Tun device (/org/freedesktop/NetworkManager/Devices/32)
Dec  9 10:57:39 compute-0 ovn_controller[97780]: 2025-12-09T10:57:39Z|00045|binding|INFO|Claiming lport b903bb84-e176-4730-b223-613a9b01712b for this chassis.
Dec  9 10:57:39 compute-0 ovn_controller[97780]: 2025-12-09T10:57:39Z|00046|binding|INFO|b903bb84-e176-4730-b223-613a9b01712b: Claiming fa:16:3e:91:d3:f4 192.168.0.92
Dec  9 10:57:39 compute-0 nova_compute[189493]: 2025-12-09 10:57:39.218 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 10:57:39 compute-0 ovn_controller[97780]: 2025-12-09T10:57:39Z|00047|binding|INFO|Setting lport b903bb84-e176-4730-b223-613a9b01712b ovn-installed in OVS
Dec  9 10:57:39 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:57:39.242 106644 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:91:d3:f4 192.168.0.92'], port_security=['fa:16:3e:91:d3:f4 192.168.0.92'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'vnf-scaleup_group-5eiooafn7y6w-4mhk6z2gnzo4-cnlzzwhsflo5-port-rb2sbixhbgrm', 'neutron:cidrs': '192.168.0.92/24', 'neutron:device_id': '7b43ca09-ed65-4465-9fcc-95caa6dc9a88', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c5af7354-5afe-400a-9e13-5500648117d8', 'neutron:port_capabilities': '', 'neutron:port_name': 'vnf-scaleup_group-5eiooafn7y6w-4mhk6z2gnzo4-cnlzzwhsflo5-port-rb2sbixhbgrm', 'neutron:project_id': '736bbfddbeea47e3ac9d863ba120b8f2', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'd86dfae4-cfd5-480d-a50e-0084326b1439', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.176'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=61df917c-633f-4b35-857d-39fd859caf35, chassis=[<ovs.db.idl.Row object at 0x7fa01184a610>], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fa01184a610>], logical_port=b903bb84-e176-4730-b223-613a9b01712b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  9 10:57:39 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:57:39.244 106644 INFO neutron.agent.ovn.metadata.agent [-] Port b903bb84-e176-4730-b223-613a9b01712b in datapath c5af7354-5afe-400a-9e13-5500648117d8 bound to our chassis#033[00m
Dec  9 10:57:39 compute-0 ovn_controller[97780]: 2025-12-09T10:57:39Z|00048|binding|INFO|Setting lport b903bb84-e176-4730-b223-613a9b01712b up in Southbound
Dec  9 10:57:39 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:57:39.248 106644 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c5af7354-5afe-400a-9e13-5500648117d8#033[00m
Dec  9 10:57:39 compute-0 nova_compute[189493]: 2025-12-09 10:57:39.254 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 10:57:39 compute-0 systemd-machined[155790]: New machine qemu-4-instance-00000004.
Dec  9 10:57:39 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:57:39.276 239934 DEBUG oslo.privsep.daemon [-] privsep: reply[0ffcae58-3a1c-4cb4-a3f9-89c87ff7efdd]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  9 10:57:39 compute-0 systemd[1]: Started Virtual Machine qemu-4-instance-00000004.
Dec  9 10:57:39 compute-0 systemd-udevd[243785]: Network interface NamePolicy= disabled on kernel command line.
Dec  9 10:57:39 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:57:39.321 239949 DEBUG oslo.privsep.daemon [-] privsep: reply[54ebd18d-47c5-412b-9170-64cdbfec9742]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  9 10:57:39 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:57:39.325 239949 DEBUG oslo.privsep.daemon [-] privsep: reply[759074cb-335f-4631-a49e-cfecf455c9ab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  9 10:57:39 compute-0 NetworkManager[56302]: <info>  [1765277859.3342] device (tapb903bb84-e1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Dec  9 10:57:39 compute-0 NetworkManager[56302]: <info>  [1765277859.3400] device (tapb903bb84-e1): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Dec  9 10:57:39 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:57:39.361 239949 DEBUG oslo.privsep.daemon [-] privsep: reply[f4121cb2-1723-42e3-adba-018ae09f6519]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  9 10:57:39 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:57:39.384 239934 DEBUG oslo.privsep.daemon [-] privsep: reply[7a425e70-655f-453d-8b94-113675a63406]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc5af7354-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:bf:0d:a0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 10, 'tx_packets': 9, 'rx_bytes': 700, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 10, 'tx_packets': 9, 'rx_bytes': 700, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 12], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 396027, 'reachable_time': 18085, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 243793, 'error': None, 'target': 'ovnmeta-c5af7354-5afe-400a-9e13-5500648117d8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  9 10:57:39 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:57:39.403 239934 DEBUG oslo.privsep.daemon [-] privsep: reply[6ec79a0b-c729-4b01-8619-175c3e0c8457]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapc5af7354-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 396043, 'tstamp': 396043}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 243795, 'error': None, 'target': 'ovnmeta-c5af7354-5afe-400a-9e13-5500648117d8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '192.168.0.2'], ['IFA_LOCAL', '192.168.0.2'], ['IFA_BROADCAST', '192.168.0.255'], ['IFA_LABEL', 'tapc5af7354-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 396046, 'tstamp': 396046}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 243795, 'error': None, 'target': 'ovnmeta-c5af7354-5afe-400a-9e13-5500648117d8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  9 10:57:39 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:57:39.405 106644 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc5af7354-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  9 10:57:39 compute-0 nova_compute[189493]: 2025-12-09 10:57:39.407 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 10:57:39 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:57:39.409 106644 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc5af7354-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  9 10:57:39 compute-0 nova_compute[189493]: 2025-12-09 10:57:39.408 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 10:57:39 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:57:39.409 106644 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  9 10:57:39 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:57:39.409 106644 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc5af7354-50, col_values=(('external_ids', {'iface-id': '3eb47070-bc26-4827-a5a8-68152f05129c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  9 10:57:39 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:57:39.410 106644 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  9 10:57:39 compute-0 nova_compute[189493]: 2025-12-09 10:57:39.465 189497 DEBUG nova.compute.manager [req-893807c6-76fe-4916-ae24-6488a2e08053 req-50727518-d0a3-41ab-927a-dd58f5ac43dc 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] [instance: 7b43ca09-ed65-4465-9fcc-95caa6dc9a88] Received event network-vif-plugged-b903bb84-e176-4730-b223-613a9b01712b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  9 10:57:39 compute-0 nova_compute[189493]: 2025-12-09 10:57:39.468 189497 DEBUG oslo_concurrency.lockutils [req-893807c6-76fe-4916-ae24-6488a2e08053 req-50727518-d0a3-41ab-927a-dd58f5ac43dc 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] Acquiring lock "7b43ca09-ed65-4465-9fcc-95caa6dc9a88-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  9 10:57:39 compute-0 nova_compute[189493]: 2025-12-09 10:57:39.470 189497 DEBUG oslo_concurrency.lockutils [req-893807c6-76fe-4916-ae24-6488a2e08053 req-50727518-d0a3-41ab-927a-dd58f5ac43dc 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] Lock "7b43ca09-ed65-4465-9fcc-95caa6dc9a88-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  9 10:57:39 compute-0 nova_compute[189493]: 2025-12-09 10:57:39.471 189497 DEBUG oslo_concurrency.lockutils [req-893807c6-76fe-4916-ae24-6488a2e08053 req-50727518-d0a3-41ab-927a-dd58f5ac43dc 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] Lock "7b43ca09-ed65-4465-9fcc-95caa6dc9a88-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  9 10:57:39 compute-0 nova_compute[189493]: 2025-12-09 10:57:39.473 189497 DEBUG nova.compute.manager [req-893807c6-76fe-4916-ae24-6488a2e08053 req-50727518-d0a3-41ab-927a-dd58f5ac43dc 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] [instance: 7b43ca09-ed65-4465-9fcc-95caa6dc9a88] Processing event network-vif-plugged-b903bb84-e176-4730-b223-613a9b01712b _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808#033[00m
Dec  9 10:57:39 compute-0 nova_compute[189493]: 2025-12-09 10:57:39.659 189497 DEBUG nova.virt.driver [None req-bd919016-4d35-4252-9704-133b2c72d336 - - - - - -] Emitting event <LifecycleEvent: 1765277859.6588347, 7b43ca09-ed65-4465-9fcc-95caa6dc9a88 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  9 10:57:39 compute-0 nova_compute[189493]: 2025-12-09 10:57:39.660 189497 INFO nova.compute.manager [None req-bd919016-4d35-4252-9704-133b2c72d336 - - - - - -] [instance: 7b43ca09-ed65-4465-9fcc-95caa6dc9a88] VM Started (Lifecycle Event)#033[00m
Dec  9 10:57:39 compute-0 nova_compute[189493]: 2025-12-09 10:57:39.664 189497 DEBUG nova.compute.manager [None req-3a588a98-b06c-468e-8ea4-854b019a066d e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 7b43ca09-ed65-4465-9fcc-95caa6dc9a88] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec  9 10:57:39 compute-0 nova_compute[189493]: 2025-12-09 10:57:39.672 189497 DEBUG nova.virt.libvirt.driver [None req-3a588a98-b06c-468e-8ea4-854b019a066d e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 7b43ca09-ed65-4465-9fcc-95caa6dc9a88] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec  9 10:57:39 compute-0 nova_compute[189493]: 2025-12-09 10:57:39.679 189497 INFO nova.virt.libvirt.driver [-] [instance: 7b43ca09-ed65-4465-9fcc-95caa6dc9a88] Instance spawned successfully.#033[00m
Dec  9 10:57:39 compute-0 nova_compute[189493]: 2025-12-09 10:57:39.679 189497 DEBUG nova.virt.libvirt.driver [None req-3a588a98-b06c-468e-8ea4-854b019a066d e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 7b43ca09-ed65-4465-9fcc-95caa6dc9a88] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec  9 10:57:39 compute-0 nova_compute[189493]: 2025-12-09 10:57:39.686 189497 DEBUG nova.compute.manager [None req-bd919016-4d35-4252-9704-133b2c72d336 - - - - - -] [instance: 7b43ca09-ed65-4465-9fcc-95caa6dc9a88] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  9 10:57:39 compute-0 nova_compute[189493]: 2025-12-09 10:57:39.692 189497 DEBUG nova.compute.manager [None req-bd919016-4d35-4252-9704-133b2c72d336 - - - - - -] [instance: 7b43ca09-ed65-4465-9fcc-95caa6dc9a88] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  9 10:57:39 compute-0 nova_compute[189493]: 2025-12-09 10:57:39.723 189497 DEBUG nova.virt.libvirt.driver [None req-3a588a98-b06c-468e-8ea4-854b019a066d e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 7b43ca09-ed65-4465-9fcc-95caa6dc9a88] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  9 10:57:39 compute-0 nova_compute[189493]: 2025-12-09 10:57:39.725 189497 DEBUG nova.virt.libvirt.driver [None req-3a588a98-b06c-468e-8ea4-854b019a066d e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 7b43ca09-ed65-4465-9fcc-95caa6dc9a88] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  9 10:57:39 compute-0 nova_compute[189493]: 2025-12-09 10:57:39.727 189497 DEBUG nova.virt.libvirt.driver [None req-3a588a98-b06c-468e-8ea4-854b019a066d e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 7b43ca09-ed65-4465-9fcc-95caa6dc9a88] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  9 10:57:39 compute-0 nova_compute[189493]: 2025-12-09 10:57:39.729 189497 DEBUG nova.virt.libvirt.driver [None req-3a588a98-b06c-468e-8ea4-854b019a066d e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 7b43ca09-ed65-4465-9fcc-95caa6dc9a88] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  9 10:57:39 compute-0 nova_compute[189493]: 2025-12-09 10:57:39.730 189497 DEBUG nova.virt.libvirt.driver [None req-3a588a98-b06c-468e-8ea4-854b019a066d e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 7b43ca09-ed65-4465-9fcc-95caa6dc9a88] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  9 10:57:39 compute-0 nova_compute[189493]: 2025-12-09 10:57:39.732 189497 DEBUG nova.virt.libvirt.driver [None req-3a588a98-b06c-468e-8ea4-854b019a066d e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 7b43ca09-ed65-4465-9fcc-95caa6dc9a88] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  9 10:57:39 compute-0 nova_compute[189493]: 2025-12-09 10:57:39.736 189497 INFO nova.compute.manager [None req-bd919016-4d35-4252-9704-133b2c72d336 - - - - - -] [instance: 7b43ca09-ed65-4465-9fcc-95caa6dc9a88] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  9 10:57:39 compute-0 nova_compute[189493]: 2025-12-09 10:57:39.737 189497 DEBUG nova.virt.driver [None req-bd919016-4d35-4252-9704-133b2c72d336 - - - - - -] Emitting event <LifecycleEvent: 1765277859.6591125, 7b43ca09-ed65-4465-9fcc-95caa6dc9a88 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  9 10:57:39 compute-0 nova_compute[189493]: 2025-12-09 10:57:39.738 189497 INFO nova.compute.manager [None req-bd919016-4d35-4252-9704-133b2c72d336 - - - - - -] [instance: 7b43ca09-ed65-4465-9fcc-95caa6dc9a88] VM Paused (Lifecycle Event)#033[00m
Dec  9 10:57:39 compute-0 nova_compute[189493]: 2025-12-09 10:57:39.808 189497 DEBUG nova.compute.manager [None req-bd919016-4d35-4252-9704-133b2c72d336 - - - - - -] [instance: 7b43ca09-ed65-4465-9fcc-95caa6dc9a88] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  9 10:57:39 compute-0 nova_compute[189493]: 2025-12-09 10:57:39.815 189497 DEBUG nova.virt.driver [None req-bd919016-4d35-4252-9704-133b2c72d336 - - - - - -] Emitting event <LifecycleEvent: 1765277859.6729915, 7b43ca09-ed65-4465-9fcc-95caa6dc9a88 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  9 10:57:39 compute-0 nova_compute[189493]: 2025-12-09 10:57:39.816 189497 INFO nova.compute.manager [None req-bd919016-4d35-4252-9704-133b2c72d336 - - - - - -] [instance: 7b43ca09-ed65-4465-9fcc-95caa6dc9a88] VM Resumed (Lifecycle Event)#033[00m
Dec  9 10:57:39 compute-0 nova_compute[189493]: 2025-12-09 10:57:39.831 189497 INFO nova.compute.manager [None req-3a588a98-b06c-468e-8ea4-854b019a066d e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 7b43ca09-ed65-4465-9fcc-95caa6dc9a88] Took 3.76 seconds to spawn the instance on the hypervisor.#033[00m
Dec  9 10:57:39 compute-0 nova_compute[189493]: 2025-12-09 10:57:39.832 189497 DEBUG nova.compute.manager [None req-3a588a98-b06c-468e-8ea4-854b019a066d e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 7b43ca09-ed65-4465-9fcc-95caa6dc9a88] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  9 10:57:39 compute-0 nova_compute[189493]: 2025-12-09 10:57:39.851 189497 DEBUG nova.compute.manager [None req-bd919016-4d35-4252-9704-133b2c72d336 - - - - - -] [instance: 7b43ca09-ed65-4465-9fcc-95caa6dc9a88] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  9 10:57:39 compute-0 nova_compute[189493]: 2025-12-09 10:57:39.862 189497 DEBUG nova.compute.manager [None req-bd919016-4d35-4252-9704-133b2c72d336 - - - - - -] [instance: 7b43ca09-ed65-4465-9fcc-95caa6dc9a88] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  9 10:57:39 compute-0 nova_compute[189493]: 2025-12-09 10:57:39.917 189497 INFO nova.compute.manager [None req-bd919016-4d35-4252-9704-133b2c72d336 - - - - - -] [instance: 7b43ca09-ed65-4465-9fcc-95caa6dc9a88] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  9 10:57:39 compute-0 nova_compute[189493]: 2025-12-09 10:57:39.933 189497 INFO nova.compute.manager [None req-3a588a98-b06c-468e-8ea4-854b019a066d e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 7b43ca09-ed65-4465-9fcc-95caa6dc9a88] Took 4.58 seconds to build instance.#033[00m
Dec  9 10:57:39 compute-0 nova_compute[189493]: 2025-12-09 10:57:39.952 189497 DEBUG oslo_concurrency.lockutils [None req-3a588a98-b06c-468e-8ea4-854b019a066d e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lock "7b43ca09-ed65-4465-9fcc-95caa6dc9a88" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 4.714s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  9 10:57:40 compute-0 nova_compute[189493]: 2025-12-09 10:57:40.007 189497 DEBUG nova.network.neutron [req-391b6854-44ae-4c9d-9195-d6392c4418b7 req-29886022-1ad6-43ef-88d0-222c80e47024 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] [instance: 7b43ca09-ed65-4465-9fcc-95caa6dc9a88] Updated VIF entry in instance network info cache for port b903bb84-e176-4730-b223-613a9b01712b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  9 10:57:40 compute-0 nova_compute[189493]: 2025-12-09 10:57:40.009 189497 DEBUG nova.network.neutron [req-391b6854-44ae-4c9d-9195-d6392c4418b7 req-29886022-1ad6-43ef-88d0-222c80e47024 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] [instance: 7b43ca09-ed65-4465-9fcc-95caa6dc9a88] Updating instance_info_cache with network_info: [{"id": "b903bb84-e176-4730-b223-613a9b01712b", "address": "fa:16:3e:91:d3:f4", "network": {"id": "c5af7354-5afe-400a-9e13-5500648117d8", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.92", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.176", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "736bbfddbeea47e3ac9d863ba120b8f2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb903bb84-e1", "ovs_interfaceid": "b903bb84-e176-4730-b223-613a9b01712b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  9 10:57:40 compute-0 nova_compute[189493]: 2025-12-09 10:57:40.032 189497 DEBUG oslo_concurrency.lockutils [req-391b6854-44ae-4c9d-9195-d6392c4418b7 req-29886022-1ad6-43ef-88d0-222c80e47024 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] Releasing lock "refresh_cache-7b43ca09-ed65-4465-9fcc-95caa6dc9a88" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  9 10:57:41 compute-0 nova_compute[189493]: 2025-12-09 10:57:41.852 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  9 10:57:41 compute-0 nova_compute[189493]: 2025-12-09 10:57:41.853 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  9 10:57:41 compute-0 podman[243803]: 2025-12-09 10:57:41.979957985 +0000 UTC m=+0.110691554 container health_status 8508a94dacd5acdb5dbf860f4282331529be5c86ebd3e90b10e1dde8bc5013e9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec  9 10:57:42 compute-0 nova_compute[189493]: 2025-12-09 10:57:42.310 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 10:57:42 compute-0 nova_compute[189493]: 2025-12-09 10:57:42.527 189497 DEBUG nova.compute.manager [req-de3688bc-ca57-4ec2-8060-cd284db802a8 req-d9e3ad28-b4ea-496c-a305-f20fbcf230eb 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] [instance: 7b43ca09-ed65-4465-9fcc-95caa6dc9a88] Received event network-vif-plugged-b903bb84-e176-4730-b223-613a9b01712b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  9 10:57:42 compute-0 nova_compute[189493]: 2025-12-09 10:57:42.528 189497 DEBUG oslo_concurrency.lockutils [req-de3688bc-ca57-4ec2-8060-cd284db802a8 req-d9e3ad28-b4ea-496c-a305-f20fbcf230eb 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] Acquiring lock "7b43ca09-ed65-4465-9fcc-95caa6dc9a88-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  9 10:57:42 compute-0 nova_compute[189493]: 2025-12-09 10:57:42.529 189497 DEBUG oslo_concurrency.lockutils [req-de3688bc-ca57-4ec2-8060-cd284db802a8 req-d9e3ad28-b4ea-496c-a305-f20fbcf230eb 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] Lock "7b43ca09-ed65-4465-9fcc-95caa6dc9a88-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  9 10:57:42 compute-0 nova_compute[189493]: 2025-12-09 10:57:42.529 189497 DEBUG oslo_concurrency.lockutils [req-de3688bc-ca57-4ec2-8060-cd284db802a8 req-d9e3ad28-b4ea-496c-a305-f20fbcf230eb 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] Lock "7b43ca09-ed65-4465-9fcc-95caa6dc9a88-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  9 10:57:42 compute-0 nova_compute[189493]: 2025-12-09 10:57:42.530 189497 DEBUG nova.compute.manager [req-de3688bc-ca57-4ec2-8060-cd284db802a8 req-d9e3ad28-b4ea-496c-a305-f20fbcf230eb 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] [instance: 7b43ca09-ed65-4465-9fcc-95caa6dc9a88] No waiting events found dispatching network-vif-plugged-b903bb84-e176-4730-b223-613a9b01712b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  9 10:57:42 compute-0 nova_compute[189493]: 2025-12-09 10:57:42.531 189497 WARNING nova.compute.manager [req-de3688bc-ca57-4ec2-8060-cd284db802a8 req-d9e3ad28-b4ea-496c-a305-f20fbcf230eb 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] [instance: 7b43ca09-ed65-4465-9fcc-95caa6dc9a88] Received unexpected event network-vif-plugged-b903bb84-e176-4730-b223-613a9b01712b for instance with vm_state active and task_state None.#033[00m
Dec  9 10:57:42 compute-0 nova_compute[189493]: 2025-12-09 10:57:42.937 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 10:57:45 compute-0 podman[243827]: 2025-12-09 10:57:45.935335281 +0000 UTC m=+0.088044473 container health_status 8ad198c17f1da12dc50d5e17562d0139fb2a2f84db056ee9551dbf4f34c4cb9d (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., io.openshift.expose-services=, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.display-name=Red Hat Universal Base Image 9, build-date=2024-09-18T21:23:30, vendor=Red Hat, Inc., config_id=edpm, container_name=kepler, distribution-scope=public, name=ubi9, release=1214.1726694543, version=9.4, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, managed_by=edpm_ansible, io.buildah.version=1.29.0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, com.redhat.component=ubi9-container, architecture=x86_64, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f)
Dec  9 10:57:45 compute-0 podman[243828]: 2025-12-09 10:57:45.958704842 +0000 UTC m=+0.096874247 container health_status ceb1c84a2b093143b9383b7e11364d7e851348d724743a0cd9ce4fd0c7070c92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_id=edpm, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, managed_by=edpm_ansible)
Dec  9 10:57:47 compute-0 nova_compute[189493]: 2025-12-09 10:57:47.313 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 10:57:47 compute-0 nova_compute[189493]: 2025-12-09 10:57:47.940 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 10:57:49 compute-0 podman[243868]: 2025-12-09 10:57:49.985002532 +0000 UTC m=+0.116715434 container health_status b432835229990b9e7cd237d75f8273b15e565fca524d4ea9a7c1f1bf3c773614 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, io.buildah.version=1.41.4, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42)
Dec  9 10:57:50 compute-0 podman[243867]: 2025-12-09 10:57:50.010614353 +0000 UTC m=+0.146268100 container health_status 8f562587c42532f877bd4ac5090cf2d81dd9415b6201e22f74972e6d6b9e9403 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent)
Dec  9 10:57:52 compute-0 nova_compute[189493]: 2025-12-09 10:57:52.315 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 10:57:52 compute-0 nova_compute[189493]: 2025-12-09 10:57:52.942 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 10:57:55 compute-0 podman[243904]: 2025-12-09 10:57:55.931946684 +0000 UTC m=+0.086102381 container health_status 5da5cd4e36e0bba48fb617392bc8983ed1dbced7e4599ef74bb3327a2d50468d (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, vendor=Red Hat, Inc., io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, distribution-scope=public, com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., managed_by=edpm_ansible, container_name=openstack_network_exporter, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, architecture=x86_64)
Dec  9 10:57:57 compute-0 nova_compute[189493]: 2025-12-09 10:57:57.317 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 10:57:57 compute-0 podman[243926]: 2025-12-09 10:57:57.868355523 +0000 UTC m=+0.125541518 container health_status e0a077177b2f078df1f170a6e5c0e8e08d4365b999ec0c487047ed6ab628f3d6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3)
Dec  9 10:57:57 compute-0 nova_compute[189493]: 2025-12-09 10:57:57.944 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 10:57:59 compute-0 podman[203687]: time="2025-12-09T10:57:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  9 10:57:59 compute-0 podman[203687]: @ - - [09/Dec/2025:10:57:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Dec  9 10:57:59 compute-0 podman[203687]: @ - - [09/Dec/2025:10:57:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4799 "" "Go-http-client/1.1"
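The two GET lines above are the podman system service answering libpod REST calls on /run/podman/podman.sock, the socket the podman_exporter container mounts (see its CONTAINER_HOST setting earlier). Here is a self-contained sketch of issuing the same containers/json request from Python; UnixHTTPConnection is a small local helper, not a podman client API.

```python
# Sketch: reproduce the "GET /v4.9.3/libpod/containers/json" call logged
# above against podman's unix socket. UnixHTTPConnection is our own helper;
# the podman service must be listening on the socket (as the log shows).
import http.client
import json
import socket

class UnixHTTPConnection(http.client.HTTPConnection):
    def __init__(self, socket_path):
        super().__init__('localhost')
        self.socket_path = socket_path

    def connect(self):
        sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        sock.connect(self.socket_path)
        self.sock = sock

conn = UnixHTTPConnection('/run/podman/podman.sock')
conn.request('GET', '/v4.9.3/libpod/containers/json?all=true')
resp = conn.getresponse()
containers = json.loads(resp.read())
print(len(containers), 'containers')
```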
Dec  9 10:57:59 compute-0 podman[243955]: 2025-12-09 10:57:59.935573341 +0000 UTC m=+0.078941949 container health_status d3a438131bb4ae6fd62d2e1493edbbbd51d1b8d6cbe1e9243f414a3aa421452b (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
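The node_exporter config above limits the systemd collector to units matching (edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service. Below is a quick, approximate check of that pattern with Python's re module (node_exporter itself evaluates it with Go's regexp, which agrees for a simple pattern like this); the sample unit names are illustrative.

```python
# Approximate check of the --collector.systemd.unit-include pattern from
# the node_exporter config_data above. Sample unit names are illustrative.
import re

unit_include = re.compile(r'(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service')
for unit in ['edpm_nova_compute.service', 'ovs-vswitchd.service',
             'virtqemud.service', 'rsyslog.service', 'sshd.service']:
    # node_exporter anchors the expression, so use fullmatch here too.
    print(unit, bool(unit_include.fullmatch(unit)))
```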
Dec  9 10:58:01 compute-0 openstack_network_exporter[205823]: ERROR   10:58:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  9 10:58:01 compute-0 openstack_network_exporter[205823]: ERROR   10:58:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  9 10:58:01 compute-0 openstack_network_exporter[205823]: ERROR   10:58:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  9 10:58:01 compute-0 openstack_network_exporter[205823]: ERROR   10:58:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  9 10:58:01 compute-0 openstack_network_exporter[205823]: ERROR   10:58:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
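These appctl errors are expected on a compute node: ovn-northd does not run on compute hosts at all, the exporter apparently finds no ovsdb-server control socket in the directory it scans, and the dpif-netdev/* commands only apply to a userspace (DPDK) datapath, which this node does not have. A small diagnostic sketch that lists whichever OVS/OVN control sockets actually exist follows; the directories are conventional defaults and may differ per deployment.

```python
# Diagnostic sketch for the "no control socket files found" errors above:
# list whatever *.ctl control sockets exist in the conventional OVS/OVN
# run directories (paths are assumptions; adjust for your layout).
import glob

for pattern in ('/var/run/openvswitch/*.ctl', '/var/run/ovn/*.ctl',
                '/run/ovn/*.ctl'):
    print(pattern, '->', glob.glob(pattern) or 'none')
```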
Dec  9 10:58:02 compute-0 nova_compute[189493]: 2025-12-09 10:58:02.319 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 10:58:02 compute-0 nova_compute[189493]: 2025-12-09 10:58:02.948 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 10:58:07 compute-0 nova_compute[189493]: 2025-12-09 10:58:07.322 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 10:58:07 compute-0 nova_compute[189493]: 2025-12-09 10:58:07.952 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 10:58:09 compute-0 podman[243977]: 2025-12-09 10:58:09.014311283 +0000 UTC m=+0.149262649 container health_status 0391d8911d61abd7376f1f93f329cadfe8d3add845c9e6f46fc2c3dfbcc4f02a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']})
Dec  9 10:58:09 compute-0 ovn_controller[97780]: 2025-12-09T10:58:09Z|00049|memory_trim|INFO|Detected inactivity (last active 30009 ms ago): trimming memory
Dec  9 10:58:12 compute-0 nova_compute[189493]: 2025-12-09 10:58:12.324 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 10:58:12 compute-0 nova_compute[189493]: 2025-12-09 10:58:12.954 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 10:58:12 compute-0 ovn_controller[97780]: 2025-12-09T10:58:12Z|00010|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:91:d3:f4 192.168.0.92
Dec  9 10:58:12 compute-0 podman[244006]: 2025-12-09 10:58:12.986989558 +0000 UTC m=+0.125540579 container health_status 8508a94dacd5acdb5dbf860f4282331529be5c86ebd3e90b10e1dde8bc5013e9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec  9 10:58:12 compute-0 ovn_controller[97780]: 2025-12-09T10:58:12Z|00011|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:91:d3:f4 192.168.0.92
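The pinctrl lines show ovn-controller answering DHCP natively for the freshly plugged port: a DHCPOFFER followed by a DHCPACK for fa:16:3e:91:d3:f4 at 192.168.0.92, the same MAC/IP pair Nova reports for instance 7b43ca09-ed65-4465-9fcc-95caa6dc9a88 further below. A throwaway parser for this line shape:

```python
# Throwaway parser for the ovn-controller pinctrl DHCP lines above,
# extracting (message type, MAC, IPv4 address) tuples.
import re

line = ('2025-12-09T10:58:12Z|00011|pinctrl(ovn_pinctrl0)|INFO|'
        'DHCPACK fa:16:3e:91:d3:f4 192.168.0.92')
m = re.search(r'(DHCPOFFER|DHCPACK)\s+'
              r'([0-9a-f:]{17})\s+(\d+\.\d+\.\d+\.\d+)', line)
if m:
    print(m.groups())  # ('DHCPACK', 'fa:16:3e:91:d3:f4', '192.168.0.92')
```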
Dec  9 10:58:16 compute-0 podman[244030]: 2025-12-09 10:58:16.978481033 +0000 UTC m=+0.122757485 container health_status 8ad198c17f1da12dc50d5e17562d0139fb2a2f84db056ee9551dbf4f34c4cb9d (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, summary=Provides the latest release of Red Hat Universal Base Image 9., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, distribution-scope=public, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, architecture=x86_64, build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0, version=9.4, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., container_name=kepler, com.redhat.component=ubi9-container, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, managed_by=edpm_ansible, io.openshift.expose-services=)
Dec  9 10:58:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:58:16.990 106644 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  9 10:58:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:58:16.990 106644 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  9 10:58:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:58:16.991 106644 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  9 10:58:17 compute-0 podman[244031]: 2025-12-09 10:58:17.031832711 +0000 UTC m=+0.165809089 container health_status ceb1c84a2b093143b9383b7e11364d7e851348d724743a0cd9ce4fd0c7070c92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=edpm, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2)
Dec  9 10:58:17 compute-0 nova_compute[189493]: 2025-12-09 10:58:17.326 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 10:58:17 compute-0 nova_compute[189493]: 2025-12-09 10:58:17.957 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 10:58:20 compute-0 podman[244070]: 2025-12-09 10:58:20.925571008 +0000 UTC m=+0.071747159 container health_status 8f562587c42532f877bd4ac5090cf2d81dd9415b6201e22f74972e6d6b9e9403 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec  9 10:58:20 compute-0 podman[244071]: 2025-12-09 10:58:20.990461603 +0000 UTC m=+0.127842010 container health_status b432835229990b9e7cd237d75f8273b15e565fca524d4ea9a7c1f1bf3c773614 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  9 10:58:22 compute-0 nova_compute[189493]: 2025-12-09 10:58:22.329 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 10:58:22 compute-0 nova_compute[189493]: 2025-12-09 10:58:22.961 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 10:58:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:23.292 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  9 10:58:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:23.294 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec  9 10:58:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:23.294 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b800>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75cde150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 10:58:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:23.295 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f8a75e1b7d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 10:58:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:23.297 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e19820>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75cde150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 10:58:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:23.298 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75eb8080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75cde150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 10:58:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:23.298 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75eb8110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75cde150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 10:58:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:23.298 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b1a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75cde150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 10:58:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:23.299 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75eb81a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75cde150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 10:58:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:23.299 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b2c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75cde150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 10:58:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:23.299 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75cde150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 10:58:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:23.299 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75cde150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 10:58:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:23.300 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a78fa8380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75cde150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 10:58:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:23.300 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a7702ebd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75cde150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 10:58:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:23.300 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b3e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75cde150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 10:58:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:23.300 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75cde150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 10:58:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:23.301 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75eb8440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75cde150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 10:58:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:23.302 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a78c21460>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75cde150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 10:58:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:23.302 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b4a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75cde150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 10:58:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:23.302 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1bce0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75cde150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 10:58:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:23.303 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75cde150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 10:58:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:23.303 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1bd10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75cde150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 10:58:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:23.303 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75cde150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 10:58:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:23.303 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1bd70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75cde150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 10:58:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:23.304 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1bdd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75cde150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 10:58:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:23.304 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1be30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75cde150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 10:58:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:23.304 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1bf20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75cde150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 10:58:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:23.304 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b7a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75cde150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 10:58:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:23.305 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1bfb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75cde150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
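The run of "Registering pollster" lines above shows the polling manager submitting every pollster from the [pollsters] source to a shared ThreadPoolExecutor; since the task runs with one worker thread (see the "Processing pollsters for [pollsters] with [1] threads" line), the pollsters queue behind each other, which is what the earlier "bigger than the number of worker threads" DEBUG message warns about. A stripped-down sketch of that fan-out, with hypothetical pollster callables standing in for the real stevedore extensions:

```python
# Stripped-down sketch of the fan-out the "Registering pollster" lines
# describe: each pollster is submitted to a shared ThreadPoolExecutor.
# The pollster callables are hypothetical stand-ins.
from concurrent.futures import ThreadPoolExecutor

def make_pollster(name):
    def poll():
        return f'{name}: polled'
    return poll

pollsters = [make_pollster(n) for n in
             ('network.incoming.bytes', 'disk.device.capacity')]

# One worker, as in the log ("... with [1] threads"), so tasks queue
# behind each other instead of running concurrently.
with ThreadPoolExecutor(max_workers=1) as executor:
    for result in executor.map(lambda p: p(), pollsters):
        print(result)
```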
Dec  9 10:58:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:23.309 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '32dd7fb0-7003-48cc-b688-4b94946c911f', 'name': 'vn-afn7y6w-fel25ona52mn-zi55qxbdeak4-vnf-r5yma3vxwd5y', 'flavor': {'id': 'cf91b364-8467-4d1e-8c92-f7d1fab99905', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '53d12211-5d5c-4333-b3ee-e3dcf1663767'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000003', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '736bbfddbeea47e3ac9d863ba120b8f2', 'user_id': 'e6d3a937c2a74eb0816d9f63820935e0', 'hostId': '17e7a15a42f56673ff2b1bfd38625d4824c4455b94d5713ec4c3a7ee', 'status': 'active', 'metadata': {'metering.server_group': '24f6e5b2-dd43-46f1-87a4-e2efc1300914'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  9 10:58:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:23.315 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '1bddf2bf-8932-4428-97d7-7342a7ec414b', 'name': 'vn-afn7y6w-x2vp5udxgoax-du67okrzyrz6-vnf-c7uowjdwt46l', 'flavor': {'id': 'cf91b364-8467-4d1e-8c92-f7d1fab99905', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '53d12211-5d5c-4333-b3ee-e3dcf1663767'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '736bbfddbeea47e3ac9d863ba120b8f2', 'user_id': 'e6d3a937c2a74eb0816d9f63820935e0', 'hostId': '17e7a15a42f56673ff2b1bfd38625d4824c4455b94d5713ec4c3a7ee', 'status': 'active', 'metadata': {'metering.server_group': '24f6e5b2-dd43-46f1-87a4-e2efc1300914'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  9 10:58:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:23.319 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance 7b43ca09-ed65-4465-9fcc-95caa6dc9a88 from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Dec  9 10:58:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:23.322 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/7b43ca09-ed65-4465-9fcc-95caa6dc9a88 -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}c39d506960fbc5044d0bc54d9594567a78a3d14170701e46780a30eef7979125" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Dec  9 10:58:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:24.552 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1959 Content-Type: application/json Date: Tue, 09 Dec 2025 10:58:23 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-7693d82e-f672-4678-a1b8-dd79f86fe1ad x-openstack-request-id: req-7693d82e-f672-4678-a1b8-dd79f86fe1ad _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Dec  9 10:58:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:24.553 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "7b43ca09-ed65-4465-9fcc-95caa6dc9a88", "name": "vn-afn7y6w-4mhk6z2gnzo4-cnlzzwhsflo5-vnf-4ifywm3gsfrq", "status": "ACTIVE", "tenant_id": "736bbfddbeea47e3ac9d863ba120b8f2", "user_id": "e6d3a937c2a74eb0816d9f63820935e0", "metadata": {"metering.server_group": "24f6e5b2-dd43-46f1-87a4-e2efc1300914"}, "hostId": "17e7a15a42f56673ff2b1bfd38625d4824c4455b94d5713ec4c3a7ee", "image": {"id": "53d12211-5d5c-4333-b3ee-e3dcf1663767", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/53d12211-5d5c-4333-b3ee-e3dcf1663767"}]}, "flavor": {"id": "cf91b364-8467-4d1e-8c92-f7d1fab99905", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/cf91b364-8467-4d1e-8c92-f7d1fab99905"}]}, "created": "2025-12-09T10:57:33Z", "updated": "2025-12-09T10:57:39Z", "addresses": {"private": [{"version": 4, "addr": "192.168.0.92", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:91:d3:f4"}, {"version": 4, "addr": "192.168.122.176", "OS-EXT-IPS:type": "floating", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:91:d3:f4"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/7b43ca09-ed65-4465-9fcc-95caa6dc9a88"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/7b43ca09-ed65-4465-9fcc-95caa6dc9a88"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": null, "OS-SRV-USG:launched_at": "2025-12-09T10:57:39.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "basic"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-00000004", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Dec  9 10:58:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:24.553 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/7b43ca09-ed65-4465-9fcc-95caa6dc9a88 used request id req-7693d82e-f672-4678-a1b8-dd79f86fe1ad request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
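The REQ/RESP pair above is keystoneauth1's HTTP debug logging of ceilometer's novaclient call to fetch instance metadata. The same request can be issued directly with a keystoneauth1 Session, as in the sketch below; all auth values are placeholders, and only the Nova URL and the microversion header come from the log itself.

```python
# Sketch of the Nova API call logged above via keystoneauth1. Every auth
# value here is a placeholder assumption; the URL and the
# X-OpenStack-Nova-API-Version header match the logged request.
from keystoneauth1 import session
from keystoneauth1.identity import v3

auth = v3.Password(auth_url='https://keystone-internal.openstack.svc:5000/v3',
                   username='ceilometer', password='***',
                   project_name='service',
                   user_domain_name='Default', project_domain_name='Default')
sess = session.Session(auth=auth)
resp = sess.get('https://nova-internal.openstack.svc:8774/v2.1/servers/'
                '7b43ca09-ed65-4465-9fcc-95caa6dc9a88',
                headers={'X-OpenStack-Nova-API-Version': '2.1'})
print(resp.status_code, resp.json()['server']['name'])
```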
Dec  9 10:58:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:24.555 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '7b43ca09-ed65-4465-9fcc-95caa6dc9a88', 'name': 'vn-afn7y6w-4mhk6z2gnzo4-cnlzzwhsflo5-vnf-4ifywm3gsfrq', 'flavor': {'id': 'cf91b364-8467-4d1e-8c92-f7d1fab99905', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '53d12211-5d5c-4333-b3ee-e3dcf1663767'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000004', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '736bbfddbeea47e3ac9d863ba120b8f2', 'user_id': 'e6d3a937c2a74eb0816d9f63820935e0', 'hostId': '17e7a15a42f56673ff2b1bfd38625d4824c4455b94d5713ec4c3a7ee', 'status': 'active', 'metadata': {'metering.server_group': '24f6e5b2-dd43-46f1-87a4-e2efc1300914'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  9 10:58:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:24.561 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f', 'name': 'test_0', 'flavor': {'id': 'cf91b364-8467-4d1e-8c92-f7d1fab99905', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '53d12211-5d5c-4333-b3ee-e3dcf1663767'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '736bbfddbeea47e3ac9d863ba120b8f2', 'user_id': 'e6d3a937c2a74eb0816d9f63820935e0', 'hostId': '17e7a15a42f56673ff2b1bfd38625d4824c4455b94d5713ec4c3a7ee', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  9 10:58:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:24.561 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Dec  9 10:58:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:24.562 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1b800>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 10:58:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:24.562 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1b800>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  9 10:58:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:24.562 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 10:58:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:24.564 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-12-09T10:58:24.562530) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  9 10:58:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:24.568 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/network.incoming.bytes volume: 1612 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:58:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:24.576 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/network.incoming.bytes volume: 8406 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:58:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:24.580 14 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for 7b43ca09-ed65-4465-9fcc-95caa6dc9a88 / tapb903bb84-e1 inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Dec  9 10:58:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:24.581 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/network.incoming.bytes volume: 1486 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:58:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:24.587 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/network.incoming.bytes volume: 2178 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:58:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:24.588 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
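
network.incoming.bytes is a cumulative per-vNIC counter read through the libvirt inspector path logged above; the "No delta meter predecessor" entry for tapb903bb84-e1 simply means no earlier reading was cached for that vNIC yet, so delta/rate meters start from this sample. A minimal sketch of one way to read the same counter, assuming libvirt-python and a local qemu:///system connection (the UUID and tap name are taken from the log):

    # Read cumulative RX/TX counters for one guest interface via libvirt.
    import libvirt

    conn = libvirt.open("qemu:///system")
    dom = conn.lookupByUUIDString("7b43ca09-ed65-4465-9fcc-95caa6dc9a88")

    # interfaceStats returns (rx_bytes, rx_packets, rx_errs, rx_drop,
    #                         tx_bytes, tx_packets, tx_errs, tx_drop).
    stats = dom.interfaceStats("tapb903bb84-e1")
    print("network.incoming.bytes:", stats[0])
    conn.close()
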
Dec  9 10:58:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:24.588 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f8a7854a570>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 10:58:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:24.589 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Dec  9 10:58:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:24.589 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e19820>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 10:58:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:24.589 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e19820>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  9 10:58:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:24.590 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-12-09T10:58:24.589705) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  9 10:58:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:24.590 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 10:58:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:24.630 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:58:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:24.631 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:58:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:24.631 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:58:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:24.666 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:58:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:24.666 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:58:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:24.667 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:58:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:24.700 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:58:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:24.701 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:58:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:24.701 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:58:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:24.732 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:58:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:24.733 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:58:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:24.733 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:58:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:24.734 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
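
disk.device.capacity is emitted once per attached block device, which is why each instance above logs three samples (two 1073741824-byte devices matching the flavor's 1 GB root and 1 GB ephemeral disks, plus one small device of a few hundred KiB that the log does not identify). The values correspond to libvirt's block-info triple; a minimal sketch, assuming libvirt-python (UUID from the log, device names illustrative):

    # blockInfo returns [capacity, allocation, physical], all in bytes.
    import libvirt

    conn = libvirt.open("qemu:///system")
    dom = conn.lookupByUUIDString("7b43ca09-ed65-4465-9fcc-95caa6dc9a88")
    for dev in ("vda", "vdb"):  # device names are illustrative
        capacity, allocation, physical = dom.blockInfo(dev)
        print(dev, "disk.device.capacity:", capacity)
    conn.close()
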
Dec  9 10:58:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:24.734 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f8a75eb8050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 10:58:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:24.735 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Dec  9 10:58:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:24.735 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75eb8080>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 10:58:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:24.735 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75eb8080>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  9 10:58:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:24.735 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 10:58:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:24.735 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/network.outgoing.packets volume: 21 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:58:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:24.735 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/network.outgoing.packets volume: 65 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:58:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:24.736 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/network.outgoing.packets volume: 14 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:58:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:24.736 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-12-09T10:58:24.735354) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  9 10:58:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:24.737 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/network.outgoing.packets volume: 23 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:58:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:24.737 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Dec  9 10:58:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:24.737 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f8a75eb80e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 10:58:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:24.738 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  9 10:58:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:24.738 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75eb8110>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 10:58:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:24.738 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75eb8110>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  9 10:58:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:24.738 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 10:58:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:24.738 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:58:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:24.739 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:58:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:24.739 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:58:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:24.739 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:58:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:24.740 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  9 10:58:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:24.740 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f8a75e1b260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 10:58:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:24.740 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Dec  9 10:58:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:24.741 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-12-09T10:58:24.738404) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  9 10:58:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:24.741 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1b1a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 10:58:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:24.741 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1b1a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  9 10:58:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:24.741 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 10:58:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:24.742 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-12-09T10:58:24.741541) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  9 10:58:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:24.821 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:58:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:24.822 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:58:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:24.822 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:58:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:24.887 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.read.bytes volume: 23325184 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:58:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:24.887 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:58:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:24.887 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:58:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:24.963 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:58:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:24.964 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:58:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:24.965 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.034 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.034 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.034 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.035 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
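
disk.device.read.bytes is a cumulative per-device counter, and disk.device.read.requests (polled further below) comes from the same basic block-statistics call; a minimal sketch of one way to read both, assuming libvirt-python (UUID from the log, device name illustrative):

    # blockStats returns (rd_req, rd_bytes, wr_req, wr_bytes, errs),
    # cumulative since the guest started.
    import libvirt

    conn = libvirt.open("qemu:///system")
    dom = conn.lookupByUUIDString("41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f")
    rd_req, rd_bytes, wr_req, wr_bytes, errs = dom.blockStats("vda")
    print("disk.device.read.bytes:", rd_bytes, "disk.device.read.requests:", rd_req)
    conn.close()
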
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.035 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f8a75eb8170>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.035 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.035 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75eb81a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.035 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75eb81a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.035 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.035 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.036 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.036 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-12-09T10:58:25.035807) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.036 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.036 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.037 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.037 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f8a75e1b290>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.037 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.037 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1b2c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.037 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1b2c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.037 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.037 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/disk.device.read.latency volume: 386883662 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.037 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/disk.device.read.latency volume: 91523197 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.038 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/disk.device.read.latency volume: 560654086 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.038 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-12-09T10:58:25.037389) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.038 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.read.latency volume: 439593872 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.038 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.read.latency volume: 92612690 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.038 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.read.latency volume: 59905939 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.039 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.read.latency volume: 492966519 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.039 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.read.latency volume: 88653492 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.039 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.read.latency volume: 59040938 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.039 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.read.latency volume: 469600468 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.040 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.read.latency volume: 78501609 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.040 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.read.latency volume: 60811824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.040 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
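
The disk.device.read.latency volumes above are cumulative totals in nanoseconds, not per-request averages, which is why they keep growing between polls. One way to read the same counter is libvirt's extended block statistics; a minimal sketch, assuming libvirt-python (UUID from the log, device name illustrative):

    # blockStatsFlags returns a dict of extended counters;
    # the *_total_times fields are cumulative nanoseconds.
    import libvirt

    conn = libvirt.open("qemu:///system")
    dom = conn.lookupByUUIDString("1bddf2bf-8932-4428-97d7-7342a7ec414b")
    stats = dom.blockStatsFlags("vda")
    print("disk.device.read.latency (ns):", stats.get("rd_total_times"))
    conn.close()
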
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.040 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f8a75e1b2f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.041 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.041 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1b320>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.041 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1b320>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.041 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.041 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.041 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-12-09T10:58:25.041230) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.041 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.041 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.042 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.read.requests volume: 844 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.042 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.042 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.042 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.042 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.043 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.043 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.043 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.043 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.044 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.044 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f8a75e1b350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.044 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.044 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1b380>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.044 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1b380>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.044 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.044 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/disk.device.usage volume: 21299200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.045 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.045 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.045 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.usage volume: 21364736 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.045 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.045 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.046 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.usage volume: 21299200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.046 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-12-09T10:58:25.044675) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.046 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.046 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.047 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.047 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.047 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.048 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.048 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f8a7710f530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.048 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.048 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a78fa8380>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.048 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a78fa8380>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.048 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.049 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-12-09T10:58:25.048577) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.071 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/cpu volume: 31480000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.096 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/cpu volume: 388560000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.126 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/cpu volume: 32640000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.152 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/cpu volume: 41680000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.153 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
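
The cpu samples are cumulative guest CPU time in nanoseconds (31480000000 ns is roughly 31.5 s of CPU time for instance 32dd7fb0-7003-48cc-b688-4b94946c911f), so usage rates have to be derived by differencing successive polls. A minimal sketch of one way to read the same counter via libvirt, assuming libvirt-python:

    # getCPUStats(True) aggregates over all vCPUs; cpu_time is cumulative ns.
    import libvirt

    conn = libvirt.open("qemu:///system")
    dom = conn.lookupByUUIDString("32dd7fb0-7003-48cc-b688-4b94946c911f")
    total = dom.getCPUStats(True)[0]
    print("cpu (ns):", total["cpu_time"])
    conn.close()
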
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.153 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f8a78ed1430>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.153 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.153 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a7702ebd0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.153 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a7702ebd0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.153 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.153 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/disk.device.allocation volume: 22224896 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.154 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.154 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.154 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.allocation volume: 21635072 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.154 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-12-09T10:58:25.153543) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.154 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.154 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.154 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.allocation volume: 21635072 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.155 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.155 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.155 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.allocation volume: 21307392 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.155 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.155 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.156 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.156 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f8a75e1b3b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.156 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.156 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1b3e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.156 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1b3e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.157 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.157 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.157 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.157 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-12-09T10:58:25.156963) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.157 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.157 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.write.bytes volume: 41852928 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.157 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.158 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.158 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.write.bytes volume: 41697280 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.158 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.158 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.158 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.159 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.159 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.159 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.159 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f8a75e1b410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.160 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.160 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1b440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.160 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1b440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.160 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.160 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/disk.device.write.latency volume: 1670377851 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.160 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/disk.device.write.latency volume: 9651641 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.160 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-12-09T10:58:25.160260) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.160 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.161 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.write.latency volume: 2122486229 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.161 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.write.latency volume: 13222286 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.161 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.161 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.write.latency volume: 2203978842 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.161 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.write.latency volume: 10632793 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.162 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.162 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.write.latency volume: 1299788707 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.162 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.write.latency volume: 9241063 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.162 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.163 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
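
Note: the disk.device.write.latency volumes above look like cumulative write-time counters in nanoseconds taken from libvirt's block statistics, not per-request latencies; that unit is an inference from the magnitudes, so treat it as an assumption. A quick sanity-check conversion in Python:

    # Assumption: volume is cumulative write time in nanoseconds.
    volume_ns = 1670377851                                  # first write.latency sample above
    print(volume_ns / 1e9, "s of accumulated write time")   # ~1.67 s
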
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.163 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f8a75eb8410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.163 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.163 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75eb8440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.163 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75eb8440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.163 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.163 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.163 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.164 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.164 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.164 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
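
Note: every power.state sample in this cycle reports volume 1. The pollster forwards the hypervisor's domain state number, and in libvirt's virDomainState enum state 1 is VIR_DOMAIN_RUNNING, so all four instances on this host are simply running. A minimal sketch of that mapping (the table follows libvirt's documented enum; the helper name is made up):

    # Map a libvirt domain state number to a human-readable label.
    LIBVIRT_POWER_STATES = {
        0: "nostate", 1: "running", 2: "blocked", 3: "paused",
        4: "shutdown", 5: "shutoff", 6: "crashed", 7: "pmsuspended",
    }

    def describe_power_state(volume: int) -> str:
        return LIBVIRT_POWER_STATES.get(volume, "unknown")

    assert describe_power_state(1) == "running"   # matches the samples above
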
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.164 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f8a75e1be90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.165 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.165 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a78c21460>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.165 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a78c21460>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.165 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-12-09T10:58:25.163404) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.165 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.165 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/network.outgoing.bytes volume: 2258 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.166 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/network.outgoing.bytes volume: 7478 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.166 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/network.outgoing.bytes volume: 1751 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.166 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/network.outgoing.bytes volume: 2314 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.166 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-12-09T10:58:25.165794) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.167 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.167 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f8a75e1b470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.167 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.167 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1b4a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.167 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1b4a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.167 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.167 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/disk.device.write.requests volume: 233 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.167 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.168 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.168 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-12-09T10:58:25.167412) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.168 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.write.requests volume: 239 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.168 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.168 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.168 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.write.requests volume: 220 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.169 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.169 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.169 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.write.requests volume: 234 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.169 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.169 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.170 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.170 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f8a75e1b830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.170 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.170 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1bce0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.170 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1bce0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.170 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.170 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-12-09T10:58:25.170726) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.170 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/network.incoming.bytes.delta volume: 126 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.171 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/network.incoming.bytes.delta volume: 3431 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.171 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.171 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.171 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
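
Note: the *.delta meters are derived rather than read directly; the pollster keeps the previous cumulative counter per instance and publishes the difference since the last poll, which is why 7b43ca09-ed65-4465-9fcc-95caa6dc9a88 can legitimately report a delta of 0 here while its cumulative network.incoming.bytes meter still carries data. A minimal sketch of that derivation (names are illustrative, not ceilometer's internals):

    previous = {}   # instance id -> last cumulative reading

    def delta_sample(instance_id, cumulative):
        prev = previous.get(instance_id)
        previous[instance_id] = cumulative
        if prev is None:
            return None                     # first poll: no delta yet
        return max(cumulative - prev, 0)    # guard against counter resets

    delta_sample("7b43ca09", 151000)         # first poll -> None
    print(delta_sample("7b43ca09", 151000))  # unchanged counter -> delta 0
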
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.172 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f8a75e1b4d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.172 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.172 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1b500>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.172 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1b500>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.172 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.172 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.173 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f8a75e1bad0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.173 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-12-09T10:58:25.172317) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.173 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.173 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1bd10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.173 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1bd10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.173 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.173 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.173 14 ERROR ceilometer.polling.manager [-] Preventing pollster network.incoming.bytes.rate from polling [<NovaLikeServer: vn-afn7y6w-4mhk6z2gnzo4-cnlzzwhsflo5-vnf-4ifywm3gsfrq>] on source pollsters from now on: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: vn-afn7y6w-4mhk6z2gnzo4-cnlzzwhsflo5-vnf-4ifywm3gsfrq>]
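
Note: the DEBUG entry two lines up explains this ERROR. The LibvirtInspector has no data source for the *.rate meters, so the pollster raises PollsterPermanentError and the polling manager blacklists the affected resources for this pollster and source instead of retrying every cycle. A self-contained sketch of the pattern (illustrative, not ceilometer's actual code; the fail_res_list attribute is an assumption):

    class PollsterPermanentError(Exception):
        """Raised when polling these resources can never succeed."""
        def __init__(self, resources):
            super().__init__(resources)
            self.fail_res_list = resources

    def poll_once(get_samples, resources, blacklist):
        try:
            return list(get_samples(resources))
        except PollsterPermanentError as err:
            blacklist.extend(err.fail_res_list)   # stop polling these for good
            return []

    blacklist = []
    def rate_pollster(resources):
        raise PollsterPermanentError(resources)    # inspector provides no data
    poll_once(rate_pollster, ["vm-1"], blacklist)  # -> blacklist == ["vm-1"]
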
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.174 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f8a75e1b530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.174 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.174 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1b560>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.174 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1b560>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.174 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.174 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.175 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f8a75e1bd40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.175 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.175 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1bd70>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.175 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1bd70>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.175 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.rate (2025-12-09T10:58:25.173409) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.175 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.175 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-12-09T10:58:25.174368) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.175 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/network.incoming.packets volume: 15 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.175 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/network.incoming.packets volume: 55 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.176 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/network.incoming.packets volume: 12 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.176 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/network.incoming.packets volume: 22 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.176 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.176 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f8a75e1bda0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.176 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.176 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1bdd0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.177 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1bdd0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.177 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.177 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.177 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-12-09T10:58:25.175459) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.177 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.177 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.177 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.178 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-12-09T10:58:25.177086) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.178 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.178 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f8a75e1be00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.178 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.178 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1be30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.179 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1be30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.179 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.179 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.179 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.179 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.180 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.180 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.180 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f8a75e1bef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.180 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.180 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1bf20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.180 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1bf20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.180 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.180 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/network.outgoing.bytes.delta volume: 507 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.181 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/network.outgoing.bytes.delta volume: 2474 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.179 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-12-09T10:58:25.179136) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.181 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.181 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.181 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-12-09T10:58:25.180877) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.182 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.182 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f8a75e1b770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.182 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.182 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1b7a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.182 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1b7a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.182 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.182 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/memory.usage volume: 49.04296875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.182 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/memory.usage volume: 48.98046875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.183 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/memory.usage volume: 49.72265625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.183 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/memory.usage volume: 48.796875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.182 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-12-09T10:58:25.182415) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.183 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
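
Note: the fractional memory.usage volumes fall out of a unit conversion; libvirt reports memory statistics in KiB while the meter is expressed in MB, so the pollster divides by 1024. Checking the first sample above (the KiB figure is back-computed, i.e. an assumption):

    used_kib = 50220                  # what libvirt would have reported (assumed)
    volume_mb = used_kib / 1024
    assert volume_mb == 49.04296875   # the 32dd7fb0... memory.usage sample above
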
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.183 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f8a75e1bf80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.184 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.184 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1bfb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.184 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1bfb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.184 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.184 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.184 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.rate (2025-12-09T10:58:25.184235) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.184 14 ERROR ceilometer.polling.manager [-] Preventing pollster network.outgoing.bytes.rate from polling [<NovaLikeServer: vn-afn7y6w-4mhk6z2gnzo4-cnlzzwhsflo5-vnf-4ifywm3gsfrq>] on source pollsters from now on: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: vn-afn7y6w-4mhk6z2gnzo4-cnlzzwhsflo5-vnf-4ifywm3gsfrq>]
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.184 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.185 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.185 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.185 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.185 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.185 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.185 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.185 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.186 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.186 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.186 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.186 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.186 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.186 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.186 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.186 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.186 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.187 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.187 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.187 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.187 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.187 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.187 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.187 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.187 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 10:58:25 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 10:58:25.187 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
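
Note: this block closes the polling cycle. Every pollster in this log follows the same lifecycle: discovery via local_instances, a coordination check that is a no-op because no hashring group is configured, a heartbeat update, one sample per instance or device, and finally one "Finished processing pollster [...]" entry. A condensed sketch of that loop shape (illustrative pseudocode, not ceilometer.polling.manager itself):

    def run_cycle(pollsters, discover, heartbeat, publish):
        for pollster in pollsters:
            resources = discover("local_instances")  # "Executing discovery process ..."
            # coordination check skipped: no hashring group configured
            heartbeat(pollster.name)                 # "Pollster heartbeat update: ..."
            try:
                for sample in pollster.get_samples(resources):
                    publish(sample)                  # the "<uuid>/<meter> volume: ..." lines
            except Exception:
                continue                             # e.g. the PollsterPermanentError above
        # followed by one "Finished processing pollster [...]" line per pollster
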
Dec  9 10:58:26 compute-0 podman[244108]: 2025-12-09 10:58:26.94099891 +0000 UTC m=+0.086039118 container health_status 5da5cd4e36e0bba48fb617392bc8983ed1dbced7e4599ef74bb3327a2d50468d (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., config_id=edpm, vcs-type=git, version=9.6, io.openshift.tags=minimal rhel9, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, name=ubi9-minimal, io.buildah.version=1.33.7, io.openshift.expose-services=)
Dec  9 10:58:27 compute-0 nova_compute[189493]: 2025-12-09 10:58:27.332 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 10:58:27 compute-0 nova_compute[189493]: 2025-12-09 10:58:27.964 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 10:58:29 compute-0 podman[244130]: 2025-12-09 10:58:29.02219508 +0000 UTC m=+0.163159339 container health_status e0a077177b2f078df1f170a6e5c0e8e08d4365b999ec0c487047ed6ab628f3d6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  9 10:58:29 compute-0 podman[203687]: time="2025-12-09T10:58:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  9 10:58:29 compute-0 podman[203687]: @ - - [09/Dec/2025:10:58:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Dec  9 10:58:29 compute-0 podman[203687]: @ - - [09/Dec/2025:10:58:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4798 "" "Go-http-client/1.1"
Dec  9 10:58:29 compute-0 nova_compute[189493]: 2025-12-09 10:58:29.842 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  9 10:58:30 compute-0 podman[244155]: 2025-12-09 10:58:30.957313586 +0000 UTC m=+0.106272346 container health_status d3a438131bb4ae6fd62d2e1493edbbbd51d1b8d6cbe1e9243f414a3aa421452b (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  9 10:58:31 compute-0 openstack_network_exporter[205823]: ERROR   10:58:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  9 10:58:31 compute-0 openstack_network_exporter[205823]: ERROR   10:58:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  9 10:58:31 compute-0 openstack_network_exporter[205823]: ERROR   10:58:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  9 10:58:31 compute-0 openstack_network_exporter[205823]: ERROR   10:58:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  9 10:58:31 compute-0 openstack_network_exporter[205823]: ERROR   10:58:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  9 10:58:31 compute-0 nova_compute[189493]: 2025-12-09 10:58:31.837 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  9 10:58:31 compute-0 nova_compute[189493]: 2025-12-09 10:58:31.840 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  9 10:58:32 compute-0 nova_compute[189493]: 2025-12-09 10:58:32.336 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 10:58:32 compute-0 nova_compute[189493]: 2025-12-09 10:58:32.841 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  9 10:58:32 compute-0 nova_compute[189493]: 2025-12-09 10:58:32.842 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  9 10:58:32 compute-0 nova_compute[189493]: 2025-12-09 10:58:32.969 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 10:58:34 compute-0 nova_compute[189493]: 2025-12-09 10:58:34.843 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  9 10:58:34 compute-0 nova_compute[189493]: 2025-12-09 10:58:34.845 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  9 10:58:35 compute-0 nova_compute[189493]: 2025-12-09 10:58:35.425 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Acquiring lock "refresh_cache-32dd7fb0-7003-48cc-b688-4b94946c911f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  9 10:58:35 compute-0 nova_compute[189493]: 2025-12-09 10:58:35.426 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Acquired lock "refresh_cache-32dd7fb0-7003-48cc-b688-4b94946c911f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  9 10:58:35 compute-0 nova_compute[189493]: 2025-12-09 10:58:35.426 189497 DEBUG nova.network.neutron [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] [instance: 32dd7fb0-7003-48cc-b688-4b94946c911f] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  9 10:58:36 compute-0 nova_compute[189493]: 2025-12-09 10:58:36.781 189497 DEBUG nova.network.neutron [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] [instance: 32dd7fb0-7003-48cc-b688-4b94946c911f] Updating instance_info_cache with network_info: [{"id": "d6164edf-adb9-4fa5-9e6d-bae85d8af633", "address": "fa:16:3e:83:9f:5d", "network": {"id": "c5af7354-5afe-400a-9e13-5500648117d8", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.98", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.244", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "736bbfddbeea47e3ac9d863ba120b8f2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd6164edf-ad", "ovs_interfaceid": "d6164edf-adb9-4fa5-9e6d-bae85d8af633", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  9 10:58:36 compute-0 nova_compute[189493]: 2025-12-09 10:58:36.803 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Releasing lock "refresh_cache-32dd7fb0-7003-48cc-b688-4b94946c911f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  9 10:58:36 compute-0 nova_compute[189493]: 2025-12-09 10:58:36.803 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] [instance: 32dd7fb0-7003-48cc-b688-4b94946c911f] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  9 10:58:36 compute-0 nova_compute[189493]: 2025-12-09 10:58:36.803 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  9 10:58:36 compute-0 nova_compute[189493]: 2025-12-09 10:58:36.803 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  9 10:58:36 compute-0 nova_compute[189493]: 2025-12-09 10:58:36.834 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  9 10:58:36 compute-0 nova_compute[189493]: 2025-12-09 10:58:36.835 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  9 10:58:36 compute-0 nova_compute[189493]: 2025-12-09 10:58:36.835 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  9 10:58:36 compute-0 nova_compute[189493]: 2025-12-09 10:58:36.836 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  9 10:58:36 compute-0 nova_compute[189493]: 2025-12-09 10:58:36.979 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/32dd7fb0-7003-48cc-b688-4b94946c911f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  9 10:58:37 compute-0 nova_compute[189493]: 2025-12-09 10:58:37.065 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/32dd7fb0-7003-48cc-b688-4b94946c911f/disk --force-share --output=json" returned: 0 in 0.086s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  9 10:58:37 compute-0 nova_compute[189493]: 2025-12-09 10:58:37.076 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/32dd7fb0-7003-48cc-b688-4b94946c911f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  9 10:58:37 compute-0 nova_compute[189493]: 2025-12-09 10:58:37.169 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/32dd7fb0-7003-48cc-b688-4b94946c911f/disk --force-share --output=json" returned: 0 in 0.092s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  9 10:58:37 compute-0 nova_compute[189493]: 2025-12-09 10:58:37.169 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/32dd7fb0-7003-48cc-b688-4b94946c911f/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  9 10:58:37 compute-0 nova_compute[189493]: 2025-12-09 10:58:37.253 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/32dd7fb0-7003-48cc-b688-4b94946c911f/disk.eph0 --force-share --output=json" returned: 0 in 0.083s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  9 10:58:37 compute-0 nova_compute[189493]: 2025-12-09 10:58:37.254 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/32dd7fb0-7003-48cc-b688-4b94946c911f/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  9 10:58:37 compute-0 nova_compute[189493]: 2025-12-09 10:58:37.331 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/32dd7fb0-7003-48cc-b688-4b94946c911f/disk.eph0 --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  9 10:58:37 compute-0 nova_compute[189493]: 2025-12-09 10:58:37.340 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 10:58:37 compute-0 nova_compute[189493]: 2025-12-09 10:58:37.345 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1bddf2bf-8932-4428-97d7-7342a7ec414b/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  9 10:58:37 compute-0 nova_compute[189493]: 2025-12-09 10:58:37.404 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1bddf2bf-8932-4428-97d7-7342a7ec414b/disk --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  9 10:58:37 compute-0 nova_compute[189493]: 2025-12-09 10:58:37.406 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1bddf2bf-8932-4428-97d7-7342a7ec414b/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  9 10:58:37 compute-0 nova_compute[189493]: 2025-12-09 10:58:37.466 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1bddf2bf-8932-4428-97d7-7342a7ec414b/disk --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  9 10:58:37 compute-0 nova_compute[189493]: 2025-12-09 10:58:37.467 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  9 10:58:37 compute-0 nova_compute[189493]: 2025-12-09 10:58:37.524 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.eph0 --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  9 10:58:37 compute-0 nova_compute[189493]: 2025-12-09 10:58:37.525 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  9 10:58:37 compute-0 nova_compute[189493]: 2025-12-09 10:58:37.598 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.eph0 --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  9 10:58:37 compute-0 nova_compute[189493]: 2025-12-09 10:58:37.618 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  9 10:58:37 compute-0 nova_compute[189493]: 2025-12-09 10:58:37.691 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  9 10:58:37 compute-0 nova_compute[189493]: 2025-12-09 10:58:37.693 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  9 10:58:37 compute-0 nova_compute[189493]: 2025-12-09 10:58:37.789 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk --force-share --output=json" returned: 0 in 0.096s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  9 10:58:37 compute-0 nova_compute[189493]: 2025-12-09 10:58:37.792 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  9 10:58:37 compute-0 nova_compute[189493]: 2025-12-09 10:58:37.908 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.eph0 --force-share --output=json" returned: 0 in 0.117s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  9 10:58:37 compute-0 nova_compute[189493]: 2025-12-09 10:58:37.911 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  9 10:58:37 compute-0 nova_compute[189493]: 2025-12-09 10:58:37.974 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 10:58:37 compute-0 nova_compute[189493]: 2025-12-09 10:58:37.992 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.eph0 --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  9 10:58:38 compute-0 nova_compute[189493]: 2025-12-09 10:58:38.002 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  9 10:58:38 compute-0 nova_compute[189493]: 2025-12-09 10:58:38.087 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk --force-share --output=json" returned: 0 in 0.085s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  9 10:58:38 compute-0 nova_compute[189493]: 2025-12-09 10:58:38.088 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  9 10:58:38 compute-0 nova_compute[189493]: 2025-12-09 10:58:38.150 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  9 10:58:38 compute-0 nova_compute[189493]: 2025-12-09 10:58:38.151 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  9 10:58:38 compute-0 nova_compute[189493]: 2025-12-09 10:58:38.211 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.eph0 --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  9 10:58:38 compute-0 nova_compute[189493]: 2025-12-09 10:58:38.215 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  9 10:58:38 compute-0 nova_compute[189493]: 2025-12-09 10:58:38.279 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.eph0 --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  9 10:58:38 compute-0 nova_compute[189493]: 2025-12-09 10:58:38.711 189497 WARNING nova.virt.libvirt.driver [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  9 10:58:38 compute-0 nova_compute[189493]: 2025-12-09 10:58:38.713 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4612MB free_disk=72.11589431762695GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  9 10:58:38 compute-0 nova_compute[189493]: 2025-12-09 10:58:38.713 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  9 10:58:38 compute-0 nova_compute[189493]: 2025-12-09 10:58:38.714 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  9 10:58:38 compute-0 nova_compute[189493]: 2025-12-09 10:58:38.995 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Instance 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  9 10:58:38 compute-0 nova_compute[189493]: 2025-12-09 10:58:38.995 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Instance 1bddf2bf-8932-4428-97d7-7342a7ec414b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  9 10:58:38 compute-0 nova_compute[189493]: 2025-12-09 10:58:38.996 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Instance 32dd7fb0-7003-48cc-b688-4b94946c911f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  9 10:58:38 compute-0 nova_compute[189493]: 2025-12-09 10:58:38.996 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Instance 7b43ca09-ed65-4465-9fcc-95caa6dc9a88 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  9 10:58:38 compute-0 nova_compute[189493]: 2025-12-09 10:58:38.996 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  9 10:58:39 compute-0 nova_compute[189493]: 2025-12-09 10:58:38.997 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=2560MB phys_disk=79GB used_disk=8GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  9 10:58:39 compute-0 nova_compute[189493]: 2025-12-09 10:58:39.126 189497 DEBUG nova.compute.provider_tree [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Inventory has not changed in ProviderTree for provider: cdc1168d-33c9-4d2c-8f23-1b695a68afd0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  9 10:58:39 compute-0 nova_compute[189493]: 2025-12-09 10:58:39.146 189497 DEBUG nova.scheduler.client.report [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Inventory has not changed for provider cdc1168d-33c9-4d2c-8f23-1b695a68afd0 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  9 10:58:39 compute-0 nova_compute[189493]: 2025-12-09 10:58:39.245 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  9 10:58:39 compute-0 nova_compute[189493]: 2025-12-09 10:58:39.247 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.533s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  9 10:58:39 compute-0 podman[244229]: 2025-12-09 10:58:39.971680586 +0000 UTC m=+0.109167473 container health_status 0391d8911d61abd7376f1f93f329cadfe8d3add845c9e6f46fc2c3dfbcc4f02a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, org.label-schema.build-date=20251202, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec  9 10:58:42 compute-0 nova_compute[189493]: 2025-12-09 10:58:42.342 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 10:58:42 compute-0 nova_compute[189493]: 2025-12-09 10:58:42.977 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 10:58:43 compute-0 nova_compute[189493]: 2025-12-09 10:58:43.288 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  9 10:58:43 compute-0 nova_compute[189493]: 2025-12-09 10:58:43.289 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  9 10:58:43 compute-0 podman[244248]: 2025-12-09 10:58:43.971385769 +0000 UTC m=+0.112146172 container health_status 8508a94dacd5acdb5dbf860f4282331529be5c86ebd3e90b10e1dde8bc5013e9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec  9 10:58:47 compute-0 nova_compute[189493]: 2025-12-09 10:58:47.345 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 10:58:47 compute-0 nova_compute[189493]: 2025-12-09 10:58:47.980 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 10:58:48 compute-0 podman[244272]: 2025-12-09 10:58:48.00108253 +0000 UTC m=+0.131854466 container health_status ceb1c84a2b093143b9383b7e11364d7e851348d724743a0cd9ce4fd0c7070c92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec  9 10:58:48 compute-0 podman[244271]: 2025-12-09 10:58:48.010388067 +0000 UTC m=+0.146023613 container health_status 8ad198c17f1da12dc50d5e17562d0139fb2a2f84db056ee9551dbf4f34c4cb9d (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, container_name=kepler, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, build-date=2024-09-18T21:23:30, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, maintainer=Red Hat, Inc., managed_by=edpm_ansible, name=ubi9, com.redhat.component=ubi9-container, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., architecture=x86_64, io.buildah.version=1.29.0, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, version=9.4, vcs-type=git, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, vendor=Red Hat, Inc.)
Dec  9 10:58:51 compute-0 podman[244311]: 2025-12-09 10:58:51.93328937 +0000 UTC m=+0.079096204 container health_status 8f562587c42532f877bd4ac5090cf2d81dd9415b6201e22f74972e6d6b9e9403 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Dec  9 10:58:51 compute-0 podman[244312]: 2025-12-09 10:58:51.943503701 +0000 UTC m=+0.090538698 container health_status b432835229990b9e7cd237d75f8273b15e565fca524d4ea9a7c1f1bf3c773614 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true)
Dec  9 10:58:52 compute-0 nova_compute[189493]: 2025-12-09 10:58:52.348 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 10:58:52 compute-0 nova_compute[189493]: 2025-12-09 10:58:52.984 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 10:58:57 compute-0 nova_compute[189493]: 2025-12-09 10:58:57.350 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 10:58:57 compute-0 podman[244350]: 2025-12-09 10:58:57.958981905 +0000 UTC m=+0.106737969 container health_status 5da5cd4e36e0bba48fb617392bc8983ed1dbced7e4599ef74bb3327a2d50468d (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, version=9.6, maintainer=Red Hat, Inc., name=ubi9-minimal, io.buildah.version=1.33.7, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Dec  9 10:58:57 compute-0 nova_compute[189493]: 2025-12-09 10:58:57.987 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 10:58:59 compute-0 podman[203687]: time="2025-12-09T10:58:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  9 10:58:59 compute-0 podman[203687]: @ - - [09/Dec/2025:10:58:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Dec  9 10:58:59 compute-0 podman[203687]: @ - - [09/Dec/2025:10:58:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4803 "" "Go-http-client/1.1"
Dec  9 10:59:00 compute-0 podman[244371]: 2025-12-09 10:59:00.04530757 +0000 UTC m=+0.178645299 container health_status e0a077177b2f078df1f170a6e5c0e8e08d4365b999ec0c487047ed6ab628f3d6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_controller, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec  9 10:59:01 compute-0 openstack_network_exporter[205823]: ERROR   10:59:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  9 10:59:01 compute-0 openstack_network_exporter[205823]: ERROR   10:59:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  9 10:59:01 compute-0 openstack_network_exporter[205823]: ERROR   10:59:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  9 10:59:01 compute-0 openstack_network_exporter[205823]: ERROR   10:59:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  9 10:59:01 compute-0 openstack_network_exporter[205823]: ERROR   10:59:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
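The exporter errors above all reduce to control-socket discovery: OVS and OVN daemons expose a unix control socket named <daemon>.<pid>.ctl next to their pidfile, and the exporter finds none for ovn-northd (which does not run on a compute node) nor, from inside its container, for ovsdb-server; the two dpif-netdev calls additionally appear to require a userspace (netdev) datapath, which this host's system datapath is not. A sketch of the conventional lookup, assuming the standard /run/openvswitch and /run/ovn run directories (the exporter's actual Go logic in appctl.go may differ):

    from pathlib import Path

    def find_control_socket(daemon, rundirs=("/run/openvswitch", "/run/ovn")):
        """Find <daemon>.<pid>.ctl the way ovs-appctl does; sketch of the
        naming convention only, with assumed standard run directories."""
        for rundir in rundirs:
            for sock in sorted(Path(rundir).glob(f"{daemon}.*.ctl")):
                return sock
        return None

    print(find_control_socket("ovn-northd"))    # None on a compute-only node
    print(find_control_socket("ovs-vswitchd"))  # a path when vswitchd is local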
Dec  9 10:59:01 compute-0 podman[244396]: 2025-12-09 10:59:01.985177294 +0000 UTC m=+0.133276945 container health_status d3a438131bb4ae6fd62d2e1493edbbbd51d1b8d6cbe1e9243f414a3aa421452b (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec  9 10:59:02 compute-0 nova_compute[189493]: 2025-12-09 10:59:02.353 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 10:59:02 compute-0 nova_compute[189493]: 2025-12-09 10:59:02.991 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 10:59:07 compute-0 nova_compute[189493]: 2025-12-09 10:59:07.356 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 10:59:07 compute-0 nova_compute[189493]: 2025-12-09 10:59:07.995 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 10:59:11 compute-0 podman[244420]: 2025-12-09 10:59:11.00006612 +0000 UTC m=+0.135829953 container health_status 0391d8911d61abd7376f1f93f329cadfe8d3add845c9e6f46fc2c3dfbcc4f02a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Dec  9 10:59:12 compute-0 nova_compute[189493]: 2025-12-09 10:59:12.359 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 10:59:12 compute-0 nova_compute[189493]: 2025-12-09 10:59:12.998 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 10:59:14 compute-0 podman[244439]: 2025-12-09 10:59:14.786819782 +0000 UTC m=+0.089326766 container health_status 8508a94dacd5acdb5dbf860f4282331529be5c86ebd3e90b10e1dde8bc5013e9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec  9 10:59:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:59:16.991 106644 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  9 10:59:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:59:16.991 106644 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  9 10:59:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 10:59:16.992 106644 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
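The acquire/release trio above is oslo.concurrency's standard instrumentation around a named lock: the inner decorator logs how long the caller waited and how long the lock was held. A minimal sketch of the pattern (the real method lives in neutron.agent.linux.external_process; the body here is a placeholder):

    from oslo_concurrency import lockutils

    @lockutils.synchronized("_check_child_processes")
    def _check_child_processes():
        # Runs with the named lock held; lockutils emits the
        # Acquiring / acquired / "released" DEBUG lines seen above.
        pass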
Dec  9 10:59:17 compute-0 nova_compute[189493]: 2025-12-09 10:59:17.361 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 10:59:18 compute-0 nova_compute[189493]: 2025-12-09 10:59:18.002 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 10:59:18 compute-0 podman[244463]: 2025-12-09 10:59:18.926403114 +0000 UTC m=+0.079828133 container health_status 8ad198c17f1da12dc50d5e17562d0139fb2a2f84db056ee9551dbf4f34c4cb9d (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, version=9.4, com.redhat.component=ubi9-container, vcs-type=git, build-date=2024-09-18T21:23:30, config_id=edpm, io.buildah.version=1.29.0, io.openshift.tags=base rhel9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, name=ubi9, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, managed_by=edpm_ansible, container_name=kepler, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']})
Dec  9 10:59:18 compute-0 podman[244464]: 2025-12-09 10:59:18.946325963 +0000 UTC m=+0.081960509 container health_status ceb1c84a2b093143b9383b7e11364d7e851348d724743a0cd9ce4fd0c7070c92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Dec  9 10:59:22 compute-0 nova_compute[189493]: 2025-12-09 10:59:22.364 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 10:59:22 compute-0 podman[244501]: 2025-12-09 10:59:22.951260247 +0000 UTC m=+0.095031458 container health_status 8f562587c42532f877bd4ac5090cf2d81dd9415b6201e22f74972e6d6b9e9403 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, tcib_managed=true)
Dec  9 10:59:22 compute-0 podman[244502]: 2025-12-09 10:59:22.974664358 +0000 UTC m=+0.112402829 container health_status b432835229990b9e7cd237d75f8273b15e565fca524d4ea9a7c1f1bf3c773614 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true)
Dec  9 10:59:23 compute-0 nova_compute[189493]: 2025-12-09 10:59:23.005 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 10:59:27 compute-0 nova_compute[189493]: 2025-12-09 10:59:27.367 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 10:59:28 compute-0 nova_compute[189493]: 2025-12-09 10:59:28.009 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 10:59:28 compute-0 podman[244540]: 2025-12-09 10:59:28.97500678 +0000 UTC m=+0.116018305 container health_status 5da5cd4e36e0bba48fb617392bc8983ed1dbced7e4599ef74bb3327a2d50468d (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=minimal rhel9, config_id=edpm, distribution-scope=public, vcs-type=git, io.buildah.version=1.33.7, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible)
Dec  9 10:59:29 compute-0 podman[203687]: time="2025-12-09T10:59:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  9 10:59:29 compute-0 podman[203687]: @ - - [09/Dec/2025:10:59:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Dec  9 10:59:29 compute-0 podman[203687]: @ - - [09/Dec/2025:10:59:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4803 "" "Go-http-client/1.1"
Dec  9 10:59:30 compute-0 nova_compute[189493]: 2025-12-09 10:59:30.844 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  9 10:59:31 compute-0 podman[244561]: 2025-12-09 10:59:31.022928914 +0000 UTC m=+0.160462096 container health_status e0a077177b2f078df1f170a6e5c0e8e08d4365b999ec0c487047ed6ab628f3d6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec  9 10:59:31 compute-0 openstack_network_exporter[205823]: ERROR   10:59:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  9 10:59:31 compute-0 openstack_network_exporter[205823]: ERROR   10:59:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  9 10:59:31 compute-0 openstack_network_exporter[205823]: ERROR   10:59:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  9 10:59:31 compute-0 openstack_network_exporter[205823]: ERROR   10:59:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  9 10:59:31 compute-0 openstack_network_exporter[205823]: ERROR   10:59:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  9 10:59:31 compute-0 nova_compute[189493]: 2025-12-09 10:59:31.837 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  9 10:59:32 compute-0 nova_compute[189493]: 2025-12-09 10:59:32.368 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 10:59:32 compute-0 nova_compute[189493]: 2025-12-09 10:59:32.842 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  9 10:59:32 compute-0 nova_compute[189493]: 2025-12-09 10:59:32.843 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  9 10:59:32 compute-0 podman[244586]: 2025-12-09 10:59:32.994018407 +0000 UTC m=+0.127095570 container health_status d3a438131bb4ae6fd62d2e1493edbbbd51d1b8d6cbe1e9243f414a3aa421452b (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec  9 10:59:33 compute-0 nova_compute[189493]: 2025-12-09 10:59:33.012 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 10:59:33 compute-0 nova_compute[189493]: 2025-12-09 10:59:33.842 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  9 10:59:35 compute-0 nova_compute[189493]: 2025-12-09 10:59:35.837 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  9 10:59:35 compute-0 nova_compute[189493]: 2025-12-09 10:59:35.871 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
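The run of "Running periodic task ComputeManager._*" lines here comes from oslo.service's periodic-task machinery: each decorated method on a PeriodicTasks subclass is dispatched by run_periodic_tasks on its own interval, which is why the DEBUG lines all cite periodic_task.py:210. A minimal sketch of the pattern (the spacing value and method body are illustrative, not nova's):

    from oslo_service import periodic_task

    class Manager(periodic_task.PeriodicTasks):
        @periodic_task.periodic_task(spacing=60)
        def _poll_rescued_instances(self, context):
            pass  # placeholder; nova's method reaps timed-out rescues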
Dec  9 10:59:36 compute-0 nova_compute[189493]: 2025-12-09 10:59:36.840 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  9 10:59:36 compute-0 nova_compute[189493]: 2025-12-09 10:59:36.841 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  9 10:59:36 compute-0 nova_compute[189493]: 2025-12-09 10:59:36.841 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  9 10:59:37 compute-0 nova_compute[189493]: 2025-12-09 10:59:37.370 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 10:59:37 compute-0 nova_compute[189493]: 2025-12-09 10:59:37.466 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Acquiring lock "refresh_cache-41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  9 10:59:37 compute-0 nova_compute[189493]: 2025-12-09 10:59:37.467 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Acquired lock "refresh_cache-41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  9 10:59:37 compute-0 nova_compute[189493]: 2025-12-09 10:59:37.467 189497 DEBUG nova.network.neutron [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] [instance: 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  9 10:59:37 compute-0 nova_compute[189493]: 2025-12-09 10:59:37.468 189497 DEBUG nova.objects.instance [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  9 10:59:38 compute-0 nova_compute[189493]: 2025-12-09 10:59:38.014 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 10:59:39 compute-0 nova_compute[189493]: 2025-12-09 10:59:39.519 189497 DEBUG nova.network.neutron [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] [instance: 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f] Updating instance_info_cache with network_info: [{"id": "2c684388-b6d9-4de0-8691-29807fabed2c", "address": "fa:16:3e:c7:65:39", "network": {"id": "c5af7354-5afe-400a-9e13-5500648117d8", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.250", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.226", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "736bbfddbeea47e3ac9d863ba120b8f2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2c684388-b6", "ovs_interfaceid": "2c684388-b6d9-4de0-8691-29807fabed2c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  9 10:59:39 compute-0 nova_compute[189493]: 2025-12-09 10:59:39.534 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Releasing lock "refresh_cache-41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  9 10:59:39 compute-0 nova_compute[189493]: 2025-12-09 10:59:39.535 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] [instance: 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
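The "Updating instance_info_cache" line above carries the instance's whole network_info as JSON. A hypothetical helper that flattens such a payload to its addresses; for the payload above it yields the fixed 192.168.0.250 / floating 192.168.122.226 pair:

    import json

    def list_addresses(network_info_json: str):
        """Flatten a nova network_info payload (as logged above) into
        (kind, address) pairs. Hypothetical log-analysis helper."""
        pairs = []
        for vif in json.loads(network_info_json):
            for subnet in vif["network"]["subnets"]:
                for ip in subnet["ips"]:
                    pairs.append((ip["type"], ip["address"]))
                    for fip in ip.get("floating_ips", []):
                        pairs.append((fip["type"], fip["address"]))
        return pairs

    # -> [('fixed', '192.168.0.250'), ('floating', '192.168.122.226')]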
Dec  9 10:59:39 compute-0 nova_compute[189493]: 2025-12-09 10:59:39.535 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  9 10:59:39 compute-0 nova_compute[189493]: 2025-12-09 10:59:39.576 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  9 10:59:39 compute-0 nova_compute[189493]: 2025-12-09 10:59:39.577 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  9 10:59:39 compute-0 nova_compute[189493]: 2025-12-09 10:59:39.577 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  9 10:59:39 compute-0 nova_compute[189493]: 2025-12-09 10:59:39.578 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  9 10:59:39 compute-0 nova_compute[189493]: 2025-12-09 10:59:39.713 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/32dd7fb0-7003-48cc-b688-4b94946c911f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  9 10:59:39 compute-0 nova_compute[189493]: 2025-12-09 10:59:39.802 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/32dd7fb0-7003-48cc-b688-4b94946c911f/disk --force-share --output=json" returned: 0 in 0.089s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  9 10:59:39 compute-0 nova_compute[189493]: 2025-12-09 10:59:39.803 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/32dd7fb0-7003-48cc-b688-4b94946c911f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  9 10:59:39 compute-0 nova_compute[189493]: 2025-12-09 10:59:39.859 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/32dd7fb0-7003-48cc-b688-4b94946c911f/disk --force-share --output=json" returned: 0 in 0.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  9 10:59:39 compute-0 nova_compute[189493]: 2025-12-09 10:59:39.860 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/32dd7fb0-7003-48cc-b688-4b94946c911f/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  9 10:59:39 compute-0 nova_compute[189493]: 2025-12-09 10:59:39.962 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/32dd7fb0-7003-48cc-b688-4b94946c911f/disk.eph0 --force-share --output=json" returned: 0 in 0.102s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  9 10:59:39 compute-0 nova_compute[189493]: 2025-12-09 10:59:39.963 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/32dd7fb0-7003-48cc-b688-4b94946c911f/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  9 10:59:40 compute-0 nova_compute[189493]: 2025-12-09 10:59:40.055 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/32dd7fb0-7003-48cc-b688-4b94946c911f/disk.eph0 --force-share --output=json" returned: 0 in 0.092s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  9 10:59:40 compute-0 nova_compute[189493]: 2025-12-09 10:59:40.066 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1bddf2bf-8932-4428-97d7-7342a7ec414b/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  9 10:59:40 compute-0 nova_compute[189493]: 2025-12-09 10:59:40.160 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1bddf2bf-8932-4428-97d7-7342a7ec414b/disk --force-share --output=json" returned: 0 in 0.094s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  9 10:59:40 compute-0 nova_compute[189493]: 2025-12-09 10:59:40.161 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1bddf2bf-8932-4428-97d7-7342a7ec414b/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  9 10:59:40 compute-0 nova_compute[189493]: 2025-12-09 10:59:40.220 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1bddf2bf-8932-4428-97d7-7342a7ec414b/disk --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  9 10:59:40 compute-0 nova_compute[189493]: 2025-12-09 10:59:40.222 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  9 10:59:40 compute-0 nova_compute[189493]: 2025-12-09 10:59:40.295 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.eph0 --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  9 10:59:40 compute-0 nova_compute[189493]: 2025-12-09 10:59:40.296 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  9 10:59:40 compute-0 nova_compute[189493]: 2025-12-09 10:59:40.359 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.eph0 --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  9 10:59:40 compute-0 nova_compute[189493]: 2025-12-09 10:59:40.366 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  9 10:59:40 compute-0 nova_compute[189493]: 2025-12-09 10:59:40.434 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  9 10:59:40 compute-0 nova_compute[189493]: 2025-12-09 10:59:40.435 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  9 10:59:40 compute-0 nova_compute[189493]: 2025-12-09 10:59:40.498 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  9 10:59:40 compute-0 nova_compute[189493]: 2025-12-09 10:59:40.499 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  9 10:59:40 compute-0 nova_compute[189493]: 2025-12-09 10:59:40.579 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.eph0 --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  9 10:59:40 compute-0 nova_compute[189493]: 2025-12-09 10:59:40.580 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  9 10:59:40 compute-0 nova_compute[189493]: 2025-12-09 10:59:40.637 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.eph0 --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  9 10:59:40 compute-0 nova_compute[189493]: 2025-12-09 10:59:40.646 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  9 10:59:40 compute-0 nova_compute[189493]: 2025-12-09 10:59:40.704 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  9 10:59:40 compute-0 nova_compute[189493]: 2025-12-09 10:59:40.707 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  9 10:59:40 compute-0 nova_compute[189493]: 2025-12-09 10:59:40.772 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  9 10:59:40 compute-0 nova_compute[189493]: 2025-12-09 10:59:40.774 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  9 10:59:40 compute-0 nova_compute[189493]: 2025-12-09 10:59:40.834 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.eph0 --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  9 10:59:40 compute-0 nova_compute[189493]: 2025-12-09 10:59:40.835 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  9 10:59:40 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Dec  9 10:59:40 compute-0 nova_compute[189493]: 2025-12-09 10:59:40.925 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.eph0 --force-share --output=json" returned: 0 in 0.090s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
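Each qemu-img probe in the audit above is wrapped by oslo.concurrency in its prlimit helper, capping the child's address space at 1 GiB (--as=1073741824) and CPU time at 30 s (--cpu=30) so a pathological image cannot hang or bloat the compute service. A minimal sketch of issuing one of these probes the same way, reusing an instance path from the log:

    from oslo_concurrency import processutils

    # Matches the --as=1073741824 --cpu=30 flags in the logged commands.
    limits = processutils.ProcessLimits(address_space=1024 ** 3, cpu_time=30)

    out, _err = processutils.execute(
        "env", "LC_ALL=C", "LANG=C",
        "qemu-img", "info",
        "/var/lib/nova/instances/32dd7fb0-7003-48cc-b688-4b94946c911f/disk",
        "--force-share", "--output=json",
        prlimit=limits)
    print(out)  # JSON description of the disk image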
Dec  9 10:59:41 compute-0 nova_compute[189493]: 2025-12-09 10:59:41.390 189497 WARNING nova.virt.libvirt.driver [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  9 10:59:41 compute-0 nova_compute[189493]: 2025-12-09 10:59:41.391 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4622MB free_disk=72.11589431762695GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  9 10:59:41 compute-0 nova_compute[189493]: 2025-12-09 10:59:41.392 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  9 10:59:41 compute-0 nova_compute[189493]: 2025-12-09 10:59:41.392 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  9 10:59:41 compute-0 nova_compute[189493]: 2025-12-09 10:59:41.601 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Instance 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  9 10:59:41 compute-0 nova_compute[189493]: 2025-12-09 10:59:41.601 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Instance 1bddf2bf-8932-4428-97d7-7342a7ec414b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  9 10:59:41 compute-0 nova_compute[189493]: 2025-12-09 10:59:41.603 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Instance 32dd7fb0-7003-48cc-b688-4b94946c911f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  9 10:59:41 compute-0 nova_compute[189493]: 2025-12-09 10:59:41.603 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Instance 7b43ca09-ed65-4465-9fcc-95caa6dc9a88 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  9 10:59:41 compute-0 nova_compute[189493]: 2025-12-09 10:59:41.603 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  9 10:59:41 compute-0 nova_compute[189493]: 2025-12-09 10:59:41.603 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=2560MB phys_disk=79GB used_disk=8GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
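A quick cross-check, using only numbers from this log, that the final view is consistent with the four per-instance allocations listed above ({'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1} each) plus the 512 MB memory reserve from the inventory refresh below:

    instances = 4                           # the four UUIDs audited above
    assert 512 + instances * 512 == 2560    # used_ram: host reserve + 4 x 512MB
    assert instances * 2 == 8               # used_disk: 4 x 2GB (reserve not counted)
    assert 8 - instances * 1 == 4           # free_vcpus: 8 total - 4 allocated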
Dec  9 10:59:41 compute-0 nova_compute[189493]: 2025-12-09 10:59:41.679 189497 DEBUG nova.scheduler.client.report [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Refreshing inventories for resource provider cdc1168d-33c9-4d2c-8f23-1b695a68afd0 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Dec  9 10:59:41 compute-0 nova_compute[189493]: 2025-12-09 10:59:41.757 189497 DEBUG nova.scheduler.client.report [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Updating ProviderTree inventory for provider cdc1168d-33c9-4d2c-8f23-1b695a68afd0 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Dec  9 10:59:41 compute-0 nova_compute[189493]: 2025-12-09 10:59:41.758 189497 DEBUG nova.compute.provider_tree [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Updating inventory in ProviderTree for provider cdc1168d-33c9-4d2c-8f23-1b695a68afd0 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Dec  9 10:59:41 compute-0 nova_compute[189493]: 2025-12-09 10:59:41.784 189497 DEBUG nova.scheduler.client.report [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Refreshing aggregate associations for resource provider cdc1168d-33c9-4d2c-8f23-1b695a68afd0, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Dec  9 10:59:41 compute-0 nova_compute[189493]: 2025-12-09 10:59:41.807 189497 DEBUG nova.scheduler.client.report [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Refreshing trait associations for resource provider cdc1168d-33c9-4d2c-8f23-1b695a68afd0, traits: COMPUTE_STORAGE_BUS_SATA,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_SSE,HW_CPU_X86_AMD_SVM,HW_CPU_X86_SSE4A,COMPUTE_STORAGE_BUS_FDC,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_SSE42,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_BMI,HW_CPU_X86_BMI2,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SHA,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_AESNI,HW_CPU_X86_CLMUL,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_ABM,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_SSSE3,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_SVM,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_DEVICE_TAGGING,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_F16C,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_AVX2,COMPUTE_NODE,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_SSE2,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_FMA3,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_ACCELERATORS,HW_CPU_X86_MMX,COMPUTE_SECURITY_TPM_2_0,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE41,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_GRAPHICS_MODEL_BOCHS _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Dec  9 10:59:41 compute-0 podman[244657]: 2025-12-09 10:59:41.955582564 +0000 UTC m=+0.093538348 container health_status 0391d8911d61abd7376f1f93f329cadfe8d3add845c9e6f46fc2c3dfbcc4f02a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251202, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  9 10:59:42 compute-0 nova_compute[189493]: 2025-12-09 10:59:42.116 189497 DEBUG nova.compute.provider_tree [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Inventory has not changed in ProviderTree for provider: cdc1168d-33c9-4d2c-8f23-1b695a68afd0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  9 10:59:42 compute-0 nova_compute[189493]: 2025-12-09 10:59:42.138 189497 DEBUG nova.scheduler.client.report [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Inventory has not changed for provider cdc1168d-33c9-4d2c-8f23-1b695a68afd0 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  9 10:59:42 compute-0 nova_compute[189493]: 2025-12-09 10:59:42.143 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  9 10:59:42 compute-0 nova_compute[189493]: 2025-12-09 10:59:42.143 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.751s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  9 10:59:42 compute-0 nova_compute[189493]: 2025-12-09 10:59:42.144 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  9 10:59:42 compute-0 nova_compute[189493]: 2025-12-09 10:59:42.372 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 10:59:43 compute-0 nova_compute[189493]: 2025-12-09 10:59:43.017 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 10:59:43 compute-0 nova_compute[189493]: 2025-12-09 10:59:43.466 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  9 10:59:43 compute-0 nova_compute[189493]: 2025-12-09 10:59:43.467 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  9 10:59:43 compute-0 nova_compute[189493]: 2025-12-09 10:59:43.841 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  9 10:59:43 compute-0 nova_compute[189493]: 2025-12-09 10:59:43.841 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Dec  9 10:59:44 compute-0 podman[244676]: 2025-12-09 10:59:44.943065637 +0000 UTC m=+0.095657704 container health_status 8508a94dacd5acdb5dbf860f4282331529be5c86ebd3e90b10e1dde8bc5013e9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec  9 10:59:45 compute-0 nova_compute[189493]: 2025-12-09 10:59:45.875 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  9 10:59:45 compute-0 nova_compute[189493]: 2025-12-09 10:59:45.875 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Dec  9 10:59:45 compute-0 nova_compute[189493]: 2025-12-09 10:59:45.888 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Dec  9 10:59:47 compute-0 nova_compute[189493]: 2025-12-09 10:59:47.376 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 10:59:48 compute-0 nova_compute[189493]: 2025-12-09 10:59:48.020 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 10:59:49 compute-0 nova_compute[189493]: 2025-12-09 10:59:49.508 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  9 10:59:49 compute-0 nova_compute[189493]: 2025-12-09 10:59:49.535 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Triggering sync for uuid 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Dec  9 10:59:49 compute-0 nova_compute[189493]: 2025-12-09 10:59:49.535 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Triggering sync for uuid 1bddf2bf-8932-4428-97d7-7342a7ec414b _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Dec  9 10:59:49 compute-0 nova_compute[189493]: 2025-12-09 10:59:49.535 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Triggering sync for uuid 32dd7fb0-7003-48cc-b688-4b94946c911f _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Dec  9 10:59:49 compute-0 nova_compute[189493]: 2025-12-09 10:59:49.536 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Triggering sync for uuid 7b43ca09-ed65-4465-9fcc-95caa6dc9a88 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268#033[00m
Dec  9 10:59:49 compute-0 nova_compute[189493]: 2025-12-09 10:59:49.536 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Acquiring lock "41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  9 10:59:49 compute-0 nova_compute[189493]: 2025-12-09 10:59:49.537 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  9 10:59:49 compute-0 nova_compute[189493]: 2025-12-09 10:59:49.537 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Acquiring lock "1bddf2bf-8932-4428-97d7-7342a7ec414b" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  9 10:59:49 compute-0 nova_compute[189493]: 2025-12-09 10:59:49.537 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "1bddf2bf-8932-4428-97d7-7342a7ec414b" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  9 10:59:49 compute-0 nova_compute[189493]: 2025-12-09 10:59:49.538 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Acquiring lock "32dd7fb0-7003-48cc-b688-4b94946c911f" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  9 10:59:49 compute-0 nova_compute[189493]: 2025-12-09 10:59:49.538 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "32dd7fb0-7003-48cc-b688-4b94946c911f" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  9 10:59:49 compute-0 nova_compute[189493]: 2025-12-09 10:59:49.538 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Acquiring lock "7b43ca09-ed65-4465-9fcc-95caa6dc9a88" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  9 10:59:49 compute-0 nova_compute[189493]: 2025-12-09 10:59:49.539 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "7b43ca09-ed65-4465-9fcc-95caa6dc9a88" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  9 10:59:49 compute-0 nova_compute[189493]: 2025-12-09 10:59:49.576 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.039s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  9 10:59:49 compute-0 nova_compute[189493]: 2025-12-09 10:59:49.582 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "1bddf2bf-8932-4428-97d7-7342a7ec414b" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.045s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  9 10:59:49 compute-0 nova_compute[189493]: 2025-12-09 10:59:49.637 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "32dd7fb0-7003-48cc-b688-4b94946c911f" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.099s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  9 10:59:49 compute-0 nova_compute[189493]: 2025-12-09 10:59:49.642 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "7b43ca09-ed65-4465-9fcc-95caa6dc9a88" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.104s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  9 10:59:49 compute-0 podman[244698]: 2025-12-09 10:59:49.950186334 +0000 UTC m=+0.090725503 container health_status 8ad198c17f1da12dc50d5e17562d0139fb2a2f84db056ee9551dbf4f34c4cb9d (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release-0.7.12=, vendor=Red Hat, Inc., release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.component=ubi9-container, build-date=2024-09-18T21:23:30, maintainer=Red Hat, Inc., container_name=kepler, io.openshift.expose-services=, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, managed_by=edpm_ansible, version=9.4, config_id=edpm, io.buildah.version=1.29.0, name=ubi9, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vcs-type=git, architecture=x86_64, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9)
Dec  9 10:59:49 compute-0 podman[244699]: 2025-12-09 10:59:49.972448826 +0000 UTC m=+0.109032149 container health_status ceb1c84a2b093143b9383b7e11364d7e851348d724743a0cd9ce4fd0c7070c92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Dec  9 10:59:52 compute-0 nova_compute[189493]: 2025-12-09 10:59:52.379 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 10:59:53 compute-0 nova_compute[189493]: 2025-12-09 10:59:53.024 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 10:59:53 compute-0 podman[244734]: 2025-12-09 10:59:53.966380396 +0000 UTC m=+0.103509553 container health_status 8f562587c42532f877bd4ac5090cf2d81dd9415b6201e22f74972e6d6b9e9403 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  9 10:59:53 compute-0 podman[244736]: 2025-12-09 10:59:53.976398713 +0000 UTC m=+0.116402366 container health_status b432835229990b9e7cd237d75f8273b15e565fca524d4ea9a7c1f1bf3c773614 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Dec  9 10:59:57 compute-0 nova_compute[189493]: 2025-12-09 10:59:57.382 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 10:59:58 compute-0 nova_compute[189493]: 2025-12-09 10:59:58.029 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 10:59:59 compute-0 podman[203687]: time="2025-12-09T10:59:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  9 10:59:59 compute-0 podman[203687]: @ - - [09/Dec/2025:10:59:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Dec  9 10:59:59 compute-0 podman[203687]: @ - - [09/Dec/2025:10:59:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4804 "" "Go-http-client/1.1"
Dec  9 10:59:59 compute-0 podman[244773]: 2025-12-09 10:59:59.96719005 +0000 UTC m=+0.112607734 container health_status 5da5cd4e36e0bba48fb617392bc8983ed1dbced7e4599ef74bb3327a2d50468d (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, version=9.6, managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, config_id=edpm, release=1755695350, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, distribution-scope=public, io.buildah.version=1.33.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=)
Dec  9 11:00:01 compute-0 openstack_network_exporter[205823]: ERROR   11:00:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  9 11:00:01 compute-0 openstack_network_exporter[205823]: ERROR   11:00:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  9 11:00:01 compute-0 openstack_network_exporter[205823]: ERROR   11:00:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  9 11:00:01 compute-0 openstack_network_exporter[205823]: ERROR   11:00:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  9 11:00:01 compute-0 openstack_network_exporter[205823]: ERROR   11:00:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  9 11:00:01 compute-0 podman[244791]: 2025-12-09 11:00:01.996004648 +0000 UTC m=+0.148734926 container health_status e0a077177b2f078df1f170a6e5c0e8e08d4365b999ec0c487047ed6ab628f3d6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Dec  9 11:00:02 compute-0 nova_compute[189493]: 2025-12-09 11:00:02.386 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:00:03 compute-0 nova_compute[189493]: 2025-12-09 11:00:03.033 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:00:03 compute-0 podman[244816]: 2025-12-09 11:00:03.935993062 +0000 UTC m=+0.083791738 container health_status d3a438131bb4ae6fd62d2e1493edbbbd51d1b8d6cbe1e9243f414a3aa421452b (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec  9 11:00:07 compute-0 nova_compute[189493]: 2025-12-09 11:00:07.389 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:00:08 compute-0 nova_compute[189493]: 2025-12-09 11:00:08.037 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:00:12 compute-0 nova_compute[189493]: 2025-12-09 11:00:12.394 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:00:12 compute-0 podman[244839]: 2025-12-09 11:00:12.980638548 +0000 UTC m=+0.128683812 container health_status 0391d8911d61abd7376f1f93f329cadfe8d3add845c9e6f46fc2c3dfbcc4f02a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202)
Dec  9 11:00:13 compute-0 nova_compute[189493]: 2025-12-09 11:00:13.042 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:00:15 compute-0 podman[244858]: 2025-12-09 11:00:15.915181734 +0000 UTC m=+0.065866522 container health_status 8508a94dacd5acdb5dbf860f4282331529be5c86ebd3e90b10e1dde8bc5013e9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec  9 11:00:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:00:16.992 106644 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  9 11:00:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:00:16.992 106644 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  9 11:00:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:00:16.993 106644 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  9 11:00:17 compute-0 nova_compute[189493]: 2025-12-09 11:00:17.395 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:00:18 compute-0 nova_compute[189493]: 2025-12-09 11:00:18.045 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:00:20 compute-0 podman[244887]: 2025-12-09 11:00:20.992447016 +0000 UTC m=+0.115367658 container health_status ceb1c84a2b093143b9383b7e11364d7e851348d724743a0cd9ce4fd0c7070c92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_ipmi)
Dec  9 11:00:21 compute-0 podman[244886]: 2025-12-09 11:00:21.007021483 +0000 UTC m=+0.135131223 container health_status 8ad198c17f1da12dc50d5e17562d0139fb2a2f84db056ee9551dbf4f34c4cb9d (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., vendor=Red Hat, Inc., managed_by=edpm_ansible, version=9.4, distribution-scope=public, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, container_name=kepler, io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9, release=1214.1726694543, io.openshift.tags=base rhel9, release-0.7.12=, config_id=edpm, name=ubi9, architecture=x86_64, com.redhat.component=ubi9-container, build-date=2024-09-18T21:23:30, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f)
Dec  9 11:00:22 compute-0 nova_compute[189493]: 2025-12-09 11:00:22.397 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:00:23 compute-0 nova_compute[189493]: 2025-12-09 11:00:23.048 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.294 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads available to execute them; therefore, one can expect the polling process to take longer than expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  9 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.295 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec  9 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.296 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b800>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75c36d20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.297 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f8a75e1b7d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.298 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e19820>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75c36d20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.298 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75eb8080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75c36d20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.298 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75eb8110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75c36d20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.299 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b1a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75c36d20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.299 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75eb81a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75c36d20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.299 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b2c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75c36d20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.299 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75c36d20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.300 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75c36d20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.300 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a78fa8380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75c36d20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.300 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a7702ebd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75c36d20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.301 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b3e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75c36d20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.301 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75c36d20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.301 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75eb8440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75c36d20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.302 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a78c21460>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75c36d20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.303 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b4a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75c36d20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.303 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1bce0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75c36d20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.303 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75c36d20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.303 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1bd10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75c36d20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.304 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75c36d20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.305 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1bd70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75c36d20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.305 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1bdd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75c36d20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.305 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1be30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75c36d20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.305 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1bf20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75c36d20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.306 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b7a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75c36d20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.306 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1bfb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75c36d20>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
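[editor's note] The registration lines above all follow one template: stevedore hands the manager one Extension per pollster plugin, and every pollster shares a single ThreadPoolExecutor plus three initially empty dicts (cache, pollster history, discovery cache). A minimal sketch of that wiring, assuming the usual 'ceilometer.poll.compute' entry-point namespace (the namespace itself is never named in the log):

from concurrent.futures import ThreadPoolExecutor
from stevedore import extension

executor = ThreadPoolExecutor(max_workers=1)   # the one executor object shared by every line
cache, history, discovery_cache = {}, {}, {}   # the three empty dicts shown per registration

# One Extension per installed pollster plugin; names like 'cpu' or
# 'network.incoming.bytes' come from the package's entry points.
mgr = extension.ExtensionManager(namespace='ceilometer.poll.compute')
for ext in mgr:
    print('Registering pollster [%s] from source [pollsters]' % ext.name)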
Dec  9 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.311 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '32dd7fb0-7003-48cc-b688-4b94946c911f', 'name': 'vn-afn7y6w-fel25ona52mn-zi55qxbdeak4-vnf-r5yma3vxwd5y', 'flavor': {'id': 'cf91b364-8467-4d1e-8c92-f7d1fab99905', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '53d12211-5d5c-4333-b3ee-e3dcf1663767'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000003', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '736bbfddbeea47e3ac9d863ba120b8f2', 'user_id': 'e6d3a937c2a74eb0816d9f63820935e0', 'hostId': '17e7a15a42f56673ff2b1bfd38625d4824c4455b94d5713ec4c3a7ee', 'status': 'active', 'metadata': {'metering.server_group': '24f6e5b2-dd43-46f1-87a4-e2efc1300914'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  9 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.318 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '1bddf2bf-8932-4428-97d7-7342a7ec414b', 'name': 'vn-afn7y6w-x2vp5udxgoax-du67okrzyrz6-vnf-c7uowjdwt46l', 'flavor': {'id': 'cf91b364-8467-4d1e-8c92-f7d1fab99905', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '53d12211-5d5c-4333-b3ee-e3dcf1663767'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '736bbfddbeea47e3ac9d863ba120b8f2', 'user_id': 'e6d3a937c2a74eb0816d9f63820935e0', 'hostId': '17e7a15a42f56673ff2b1bfd38625d4824c4455b94d5713ec4c3a7ee', 'status': 'active', 'metadata': {'metering.server_group': '24f6e5b2-dd43-46f1-87a4-e2efc1300914'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  9 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.325 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '7b43ca09-ed65-4465-9fcc-95caa6dc9a88', 'name': 'vn-afn7y6w-4mhk6z2gnzo4-cnlzzwhsflo5-vnf-4ifywm3gsfrq', 'flavor': {'id': 'cf91b364-8467-4d1e-8c92-f7d1fab99905', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '53d12211-5d5c-4333-b3ee-e3dcf1663767'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000004', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '736bbfddbeea47e3ac9d863ba120b8f2', 'user_id': 'e6d3a937c2a74eb0816d9f63820935e0', 'hostId': '17e7a15a42f56673ff2b1bfd38625d4824c4455b94d5713ec4c3a7ee', 'status': 'active', 'metadata': {'metering.server_group': '24f6e5b2-dd43-46f1-87a4-e2efc1300914'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  9 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.331 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f', 'name': 'test_0', 'flavor': {'id': 'cf91b364-8467-4d1e-8c92-f7d1fab99905', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '53d12211-5d5c-4333-b3ee-e3dcf1663767'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '736bbfddbeea47e3ac9d863ba120b8f2', 'user_id': 'e6d3a937c2a74eb0816d9f63820935e0', 'hostId': '17e7a15a42f56673ff2b1bfd38625d4824c4455b94d5713ec4c3a7ee', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
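[editor's note] Each discover_libvirt_polling line dumps one instance record as a Python repr, not JSON. A small sketch of pulling out the fields the pollsters key on, using an abridged copy of the first record above:

import ast

# Abridged copy of the first record above; the real line carries more keys.
payload = ("{'id': '32dd7fb0-7003-48cc-b688-4b94946c911f', "
           "'flavor': {'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1}, "
           "'OS-EXT-STS:vm_state': 'running', 'status': 'active'}")
inst = ast.literal_eval(payload)   # repr-style dict, so literal_eval rather than json
print(inst['id'], inst['flavor']['vcpus'], inst['status'])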
Dec  9 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.332 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Dec  9 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.332 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1b800>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.333 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1b800>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  9 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.333 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.335 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-12-09T11:00:23.333325) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  9 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.342 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/network.incoming.bytes volume: 1612 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.348 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/network.incoming.bytes volume: 8406 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.354 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/network.incoming.bytes volume: 1486 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.361 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/network.incoming.bytes volume: 2178 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.363 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
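[editor's note] network.incoming.bytes is a cumulative counter, so the volumes above (1612, 8406, 1486, 2178) only become bandwidth once two successive polls are differenced downstream. A sketch of that rate transform; the 300 s interval and the later value are assumptions, the log does not show the polling period:

def rate(prev_volume, prev_ts, cur_volume, cur_ts):
    """Bytes per second between two cumulative samples."""
    dt = cur_ts - prev_ts
    if dt <= 0 or cur_volume < prev_volume:   # clock skew or counter reset
        return None
    return (cur_volume - prev_volume) / dt

# instance 1bddf2bf... polled again with a hypothetical later value:
print(rate(8406, 0.0, 10906, 300.0))   # -> 8.33 B/s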
Dec  9 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.363 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f8a7854a570>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.363 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Dec  9 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.364 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e19820>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.364 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e19820>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  9 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.364 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.365 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-12-09T11:00:23.364528) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  9 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.407 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.407 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.408 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.451 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.452 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.453 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.498 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.498 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.498 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.538 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.538 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.538 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.539 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
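[editor's note] Each instance reports three disk.device.capacity samples: two devices of exactly 1073741824 bytes (1 GiB, matching the m1.small flavor's disk=1 and ephemeral=1 in the discovery records) and one small device of 583680 or 485376 bytes, plausibly a config drive (an inference, the device names are not logged). The arithmetic as a quick check:

GIB = 1024 ** 3
assert 1073741824 == GIB   # root and ephemeral devices: flavor disk=1, ephemeral=1
print(583680 / 1024)       # -> 570.0 KiB for the third device on three instances
print(485376 / 1024)       # -> 474.0 KiB on test_0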
Dec  9 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.539 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f8a75eb8050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.540 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Dec  9 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.540 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75eb8080>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.540 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75eb8080>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  9 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.540 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.540 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/network.outgoing.packets volume: 22 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.540 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/network.outgoing.packets volume: 66 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.540 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/network.outgoing.packets volume: 20 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.541 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/network.outgoing.packets volume: 24 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.541 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Dec  9 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.542 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f8a75eb80e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.542 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  9 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.542 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75eb8110>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.542 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75eb8110>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  9 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.542 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.542 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.542 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.542 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.543 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.543 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  9 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.543 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f8a75e1b260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.543 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Dec  9 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.543 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1b1a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.544 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1b1a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  9 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.544 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.543 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-12-09T11:00:23.540289) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  9 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.545 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-12-09T11:00:23.542340) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  9 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.545 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-12-09T11:00:23.544121) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
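[editor's note] Note the two worker columns: worker 14 runs the pollsters and logs the "heartbeat update" lines, while worker 12 persists the timestamps via _update_status, sometimes batching several at once (the three lines just above). A minimal queue-based sketch of that producer/consumer split; this illustrates the pattern only, it is not ceilometer's actual implementation:

import queue
import threading
from datetime import datetime, timezone

beats = queue.Queue()
status = {}

def updater():
    # Plays the role of worker 12: drain heartbeat events, persist timestamps.
    while True:
        item = beats.get()
        if item is None:
            break
        name, ts = item
        status[name] = ts

t = threading.Thread(target=updater)
t.start()
# Worker 14's side: emit one heartbeat per pollster run (the batched trio above).
for meter in ('network.outgoing.packets', 'network.outgoing.packets.drop',
              'disk.device.read.bytes'):
    beats.put((meter, datetime.now(timezone.utc).isoformat()))
beats.put(None)
t.join()
print(status)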
Dec  9 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.645 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.646 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.646 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.741 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.read.bytes volume: 23325184 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.742 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.742 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.858 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.859 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.860 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.961 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.962 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.964 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.966 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
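[editor's note] The _stats_to_sample lines all share a fixed "<uuid>/<meter> volume: <n>" shape, which makes runs like this easy to mine. A small parser written against exactly that shape ('messages' is a hypothetical path to this log file):

import re
from collections import defaultdict

SAMPLE_RE = re.compile(
    r'(?P<uuid>[0-9a-f-]{36})/(?P<meter>[\w.]+) volume: (?P<volume>\d+)')

per_meter = defaultdict(list)
with open('messages') as fh:   # hypothetical path to this log
    for line in fh:
        m = SAMPLE_RE.search(line)
        if m:
            per_meter[m['meter']].append((m['uuid'], int(m['volume'])))

print({meter: len(samples) for meter, samples in per_meter.items()})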
Dec  9 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.967 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f8a75eb8170>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.967 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Dec  9 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.967 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75eb81a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.968 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75eb81a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  9 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.968 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.969 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-12-09T11:00:23.968434) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  9 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.969 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.970 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.970 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.971 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.971 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Dec  9 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.972 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f8a75e1b290>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.972 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Dec  9 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.972 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1b2c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.972 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1b2c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  9 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.973 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-12-09T11:00:23.973043) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  9 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.973 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.974 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/disk.device.read.latency volume: 386883662 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.974 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/disk.device.read.latency volume: 91523197 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.975 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/disk.device.read.latency volume: 560654086 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.975 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.read.latency volume: 439593872 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.976 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.read.latency volume: 92612690 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.976 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.read.latency volume: 59905939 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.977 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.read.latency volume: 492966519 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.977 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.read.latency volume: 88653492 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.978 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.read.latency volume: 59040938 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.978 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.read.latency volume: 469600468 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.979 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.read.latency volume: 78501609 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.979 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.read.latency volume: 60811824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.980 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
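[editor's note] disk.device.read.latency volumes are cumulative nanoseconds of read wait time reported by libvirt, so 386883662 is roughly 0.39 s accumulated since the instance booted. Dividing by the cumulative request count from the disk.device.read.requests cycle below gives a mean per-request figure; pairing the two counters by device order is an assumption:

total_ns = 386883662    # first device of 32dd7fb0..., cumulative read wait (ns)
total_reqs = 840        # cumulative read requests, same device, next cycle
print(total_ns / total_reqs / 1e6)   # -> ~0.46 ms mean latency per read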
Dec  9 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.981 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f8a75e1b2f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.981 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Dec  9 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.981 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1b320>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.981 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1b320>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  9 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.982 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.982 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-12-09T11:00:23.981831) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  9 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.982 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.983 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.983 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.984 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.read.requests volume: 844 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.984 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.985 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.985 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.985 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.986 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.986 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.987 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.987 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.988 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Dec  9 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.989 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f8a75e1b350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.989 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Dec  9 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.989 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1b380>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.989 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1b380>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  9 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.990 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.990 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-12-09T11:00:23.989864) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  9 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.990 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/disk.device.usage volume: 21299200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.991 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.991 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.992 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.usage volume: 21364736 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.992 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.993 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.993 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.usage volume: 21299200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.993 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.994 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.994 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.995 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.995 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.996 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Dec  9 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.997 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f8a7710f530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.997 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Dec  9 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.997 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a78fa8380>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.997 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a78fa8380>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  9 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.999 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-12-09T11:00:23.998087) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  9 11:00:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:23.998 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.033 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/cpu volume: 33290000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.067 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/cpu volume: 390330000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.110 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/cpu volume: 34450000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.140 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/cpu volume: 43510000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.142 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
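[editor's note] The cpu meter is cumulative CPU time in nanoseconds, so 390330000000 means instance 1bddf2bf... has consumed about 390 s of CPU since launch. Utilization comes from differencing two polls and normalizing by wall time and vCPU count (1 for m1.small); the follow-up sample and the 300 s interval below are hypothetical:

def cpu_util(prev_ns, cur_ns, interval_s, vcpus):
    """Percent CPU between two cumulative cpu-time samples."""
    return 100.0 * (cur_ns - prev_ns) / (interval_s * 1e9 * vcpus)

# Hypothetical follow-up poll of 1bddf2bf... 300 s later:
print(cpu_util(390330000000, 390345000000, 300, 1))   # -> 0.005 %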
Dec  9 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.143 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f8a78ed1430>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.143 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Dec  9 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.143 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a7702ebd0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.144 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a7702ebd0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  9 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.144 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.144 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/disk.device.allocation volume: 22224896 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.145 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-12-09T11:00:24.144268) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  9 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.145 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.146 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.146 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.allocation volume: 21635072 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.147 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.147 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.148 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.allocation volume: 21635072 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.148 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.148 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.149 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.allocation volume: 21307392 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.149 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.149 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.150 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
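
[editor's note] The block above shows one complete pollster cycle: discovery of local instances, a coordination (hashring) check, a heartbeat update, then one sample per instance and per device. The same four steps repeat for every pollster below. A minimal, self-contained Python sketch of that flow, using illustrative stand-in names rather than the real ceilometer classes (the actual logic is in ceilometer/polling/manager.py, _internal_pollster_run):

    # Illustrative sketch only: Sample and run_pollster are stand-ins,
    # not the ceilometer API.
    from dataclasses import dataclass
    from datetime import datetime, timezone

    @dataclass
    class Sample:
        resource_id: str  # instance UUID, e.g. 32dd7fb0-7003-48cc-b688-4b94946c911f
        meter: str        # e.g. disk.device.allocation
        volume: float     # the value logged by _stats_to_sample

    def run_pollster(name, discover, get_values, coordination_group, heartbeats):
        # 1. Discovery ("Executing discovery process ... [local_instances]").
        resources = discover()
        # 2. Coordination check: group name [None] in the log means no
        #    hashring filtering is applied for these pollsters.
        if coordination_group is not None:
            raise NotImplementedError("hashring partitioning not sketched here")
        # 3. Heartbeat ("Pollster heartbeat update: <name>"), later reported
        #    by the status thread as "Updated heartbeat for <name>".
        heartbeats[name] = datetime.now(timezone.utc).isoformat()
        # 4. One sample per resource, and per device for disk.device.* meters.
        return [Sample(r, name, v) for r in resources for v in get_values(r)]

    heartbeats = {}
    samples = run_pollster(
        name="disk.device.allocation",
        discover=lambda: ["32dd7fb0-7003-48cc-b688-4b94946c911f"],
        get_values=lambda r: [22224896.0, 1253376.0, 585728.0],  # per-device
        coordination_group=None,
        heartbeats=heartbeats,
    )
    print(len(samples), heartbeats)
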
Dec  9 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.151 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f8a75e1b3b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.151 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Dec  9 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.151 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1b3e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.151 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1b3e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  9 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.151 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.152 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.152 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.152 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.153 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.write.bytes volume: 41852928 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.153 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.154 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.154 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.155 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.155 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.156 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.156 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.156 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.157 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Dec  9 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.158 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f8a75e1b410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.158 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Dec  9 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.158 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1b440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.158 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1b440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  9 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.159 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.159 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-12-09T11:00:24.151821) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  9 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.159 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/disk.device.write.latency volume: 1670377851 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.159 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-12-09T11:00:24.159197) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  9 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.159 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/disk.device.write.latency volume: 9651641 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.160 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.160 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.write.latency volume: 2122486229 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.161 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.write.latency volume: 13222286 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.161 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.161 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.write.latency volume: 2223058984 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.162 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.write.latency volume: 10632793 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.162 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.163 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.write.latency volume: 1299788707 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.163 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.write.latency volume: 9241063 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.163 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.164 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Dec  9 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.165 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f8a75eb8410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.165 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Dec  9 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.165 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75eb8440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.165 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75eb8440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  9 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.165 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.165 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.166 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.166 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.167 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-12-09T11:00:24.165647) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  9 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.167 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.167 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Dec  9 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.168 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f8a75e1be90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.168 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Dec  9 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.168 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a78c21460>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.168 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a78c21460>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  9 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.168 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.168 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/network.outgoing.bytes volume: 2328 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.169 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/network.outgoing.bytes volume: 7548 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.169 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/network.outgoing.bytes volume: 2216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.170 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/network.outgoing.bytes volume: 2384 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.170 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Dec  9 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.171 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f8a75e1b470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.171 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Dec  9 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.171 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1b4a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.171 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1b4a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  9 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.171 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.171 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/disk.device.write.requests volume: 233 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.172 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.172 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.173 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.write.requests volume: 239 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.173 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.173 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.174 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.write.requests volume: 232 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.174 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.175 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.175 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.write.requests volume: 234 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.176 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.176 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.178 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Dec  9 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.178 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f8a75e1b830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.179 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Dec  9 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.179 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1bce0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.179 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1bce0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  9 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.179 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.179 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.180 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.180 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.180 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.181 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Dec  9 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.181 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f8a75e1b4d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.181 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Dec  9 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.181 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1b500>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.181 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1b500>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  9 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.181 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.182 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Dec  9 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.182 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f8a75e1bad0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.182 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
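
[editor's note] The .rate meters (here network.incoming.bytes.rate, and network.outgoing.bytes.rate further down) are skipped this cycle with "no new resources found this cycle". Rate meters are typically derived from two successive readings of a cumulative counter, so a cycle without usable resources yields no sample. A hedged, illustrative computation (not ceilometer code):

    def bytes_rate(prev_bytes, prev_ts, cur_bytes, cur_ts):
        # Per-second rate from two cumulative byte counters read at
        # prev_ts/cur_ts (seconds). Assumes no counter reset in between.
        return (cur_bytes - prev_bytes) / (cur_ts - prev_ts)

    assert bytes_rate(1000, 0.0, 4000, 300.0) == 10.0  # 3000 bytes over 5 min
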
Dec  9 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.183 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f8a75e1b530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.183 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Dec  9 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.183 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1b560>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.183 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1b560>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  9 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.183 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.184 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Dec  9 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.184 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f8a75e1bd40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.184 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Dec  9 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.184 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1bd70>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.184 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1bd70>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  9 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.184 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.185 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/network.incoming.packets volume: 15 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.185 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/network.incoming.packets volume: 55 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.185 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/network.incoming.packets volume: 12 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.185 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/network.incoming.packets volume: 22 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.186 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Dec  9 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.186 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f8a75e1bda0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.186 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Dec  9 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.186 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1bdd0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.187 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1bdd0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  9 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.187 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.187 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.187 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.187 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.188 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.188 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Dec  9 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.188 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f8a75e1be00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.188 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Dec  9 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.189 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1be30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.189 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1be30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  9 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.189 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-12-09T11:00:24.168747) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  9 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.189 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.189 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.189 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.190 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.190 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.191 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Dec  9 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.191 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f8a75e1bef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.191 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  9 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.191 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1bf20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.191 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-12-09T11:00:24.171747) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  9 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.191 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1bf20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  9 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.191 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.192 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.192 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.192 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/network.outgoing.bytes.delta volume: 465 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.192 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.193 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  9 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.193 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f8a75e1b770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.193 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Dec  9 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.193 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-12-09T11:00:24.179491) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  9 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.193 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1b7a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.193 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1b7a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  9 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.193 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.193 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-12-09T11:00:24.181659) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  9 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.193 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/memory.usage volume: 49.04296875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.194 14 DEBUG ceilometer.compute.pollsters [-] 1bddf2bf-8932-4428-97d7-7342a7ec414b/memory.usage volume: 48.97265625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.194 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/memory.usage volume: 49.10546875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.194 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/memory.usage volume: 48.796875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.194 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
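
[editor's note] The fractional memory.usage volumes above (49.04296875, 48.97265625, 49.10546875, 48.796875) are all exact multiples of 1/1024, consistent with a KiB statistic converted to MB (libvirt reports guest memory stats in KiB; ceilometer's memory.usage meter is in MB). Worked for the first instance, with a hypothetical KiB figure:

    kib = 50220                 # hypothetical libvirt memory figure, in KiB
    mb = kib / 1024             # memory.usage is reported in MB
    assert mb == 49.04296875    # matches the volume logged for 32dd7fb0-...
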
Dec  9 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.194 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f8a75e1bf80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.195 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  9 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.195 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.195 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.195 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.195 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.196 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.196 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.196 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.196 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.196 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-12-09T11:00:24.183537) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  9 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.196 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.196 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.196 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.196 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.196 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-12-09T11:00:24.184959) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  9 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.196 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.196 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.196 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.197 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.197 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.197 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.197 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.197 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.197 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.197 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.197 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-12-09T11:00:24.187091) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  9 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.197 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.197 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.197 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.197 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.198 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-12-09T11:00:24.189390) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  9 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.198 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-12-09T11:00:24.191906) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  9 11:00:24 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:00:24.198 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-12-09T11:00:24.193816) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  9 11:00:24 compute-0 podman[244926]: 2025-12-09 11:00:24.923237607 +0000 UTC m=+0.071715418 container health_status 8f562587c42532f877bd4ac5090cf2d81dd9415b6201e22f74972e6d6b9e9403 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Dec  9 11:00:24 compute-0 podman[244927]: 2025-12-09 11:00:24.955355371 +0000 UTC m=+0.103639927 container health_status b432835229990b9e7cd237d75f8273b15e565fca524d4ea9a7c1f1bf3c773614 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, config_id=edpm)
Dec  9 11:00:27 compute-0 nova_compute[189493]: 2025-12-09 11:00:27.399 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:00:28 compute-0 nova_compute[189493]: 2025-12-09 11:00:28.051 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:00:29 compute-0 podman[203687]: time="2025-12-09T11:00:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  9 11:00:29 compute-0 podman[203687]: @ - - [09/Dec/2025:11:00:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Dec  9 11:00:29 compute-0 podman[203687]: @ - - [09/Dec/2025:11:00:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4802 "" "Go-http-client/1.1"
Dec  9 11:00:30 compute-0 podman[244960]: 2025-12-09 11:00:30.984505449 +0000 UTC m=+0.121843961 container health_status 5da5cd4e36e0bba48fb617392bc8983ed1dbced7e4599ef74bb3327a2d50468d (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, vcs-type=git, vendor=Red Hat, Inc., io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-08-20T13:12:41, config_id=edpm, distribution-scope=public, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, name=ubi9-minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec  9 11:00:31 compute-0 openstack_network_exporter[205823]: ERROR   11:00:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  9 11:00:31 compute-0 openstack_network_exporter[205823]: ERROR   11:00:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  9 11:00:31 compute-0 openstack_network_exporter[205823]: ERROR   11:00:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  9 11:00:31 compute-0 openstack_network_exporter[205823]: ERROR   11:00:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  9 11:00:31 compute-0 openstack_network_exporter[205823]: ERROR   11:00:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  9 11:00:31 compute-0 nova_compute[189493]: 2025-12-09 11:00:31.872 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  9 11:00:32 compute-0 nova_compute[189493]: 2025-12-09 11:00:32.403 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:00:32 compute-0 nova_compute[189493]: 2025-12-09 11:00:32.837 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  9 11:00:32 compute-0 nova_compute[189493]: 2025-12-09 11:00:32.842 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  9 11:00:32 compute-0 podman[244980]: 2025-12-09 11:00:32.989948644 +0000 UTC m=+0.133113060 container health_status e0a077177b2f078df1f170a6e5c0e8e08d4365b999ec0c487047ed6ab628f3d6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3)
Dec  9 11:00:33 compute-0 nova_compute[189493]: 2025-12-09 11:00:33.054 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:00:33 compute-0 nova_compute[189493]: 2025-12-09 11:00:33.842 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  9 11:00:34 compute-0 nova_compute[189493]: 2025-12-09 11:00:34.842 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  9 11:00:34 compute-0 podman[245006]: 2025-12-09 11:00:34.950300871 +0000 UTC m=+0.101379666 container health_status d3a438131bb4ae6fd62d2e1493edbbbd51d1b8d6cbe1e9243f414a3aa421452b (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  9 11:00:36 compute-0 nova_compute[189493]: 2025-12-09 11:00:36.841 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  9 11:00:37 compute-0 nova_compute[189493]: 2025-12-09 11:00:37.407 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:00:37 compute-0 nova_compute[189493]: 2025-12-09 11:00:37.842 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  9 11:00:37 compute-0 nova_compute[189493]: 2025-12-09 11:00:37.843 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  9 11:00:38 compute-0 nova_compute[189493]: 2025-12-09 11:00:38.058 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:00:38 compute-0 nova_compute[189493]: 2025-12-09 11:00:38.541 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Acquiring lock "refresh_cache-1bddf2bf-8932-4428-97d7-7342a7ec414b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  9 11:00:38 compute-0 nova_compute[189493]: 2025-12-09 11:00:38.542 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Acquired lock "refresh_cache-1bddf2bf-8932-4428-97d7-7342a7ec414b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  9 11:00:38 compute-0 nova_compute[189493]: 2025-12-09 11:00:38.542 189497 DEBUG nova.network.neutron [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] [instance: 1bddf2bf-8932-4428-97d7-7342a7ec414b] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  9 11:00:40 compute-0 nova_compute[189493]: 2025-12-09 11:00:40.163 189497 DEBUG nova.network.neutron [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] [instance: 1bddf2bf-8932-4428-97d7-7342a7ec414b] Updating instance_info_cache with network_info: [{"id": "7819acf8-daa2-4391-96d4-ef33c260f794", "address": "fa:16:3e:01:4e:b4", "network": {"id": "c5af7354-5afe-400a-9e13-5500648117d8", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.212", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.172", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "736bbfddbeea47e3ac9d863ba120b8f2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7819acf8-da", "ovs_interfaceid": "7819acf8-daa2-4391-96d4-ef33c260f794", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  9 11:00:40 compute-0 nova_compute[189493]: 2025-12-09 11:00:40.182 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Releasing lock "refresh_cache-1bddf2bf-8932-4428-97d7-7342a7ec414b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  9 11:00:40 compute-0 nova_compute[189493]: 2025-12-09 11:00:40.183 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] [instance: 1bddf2bf-8932-4428-97d7-7342a7ec414b] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  9 11:00:40 compute-0 nova_compute[189493]: 2025-12-09 11:00:40.185 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  9 11:00:40 compute-0 nova_compute[189493]: 2025-12-09 11:00:40.220 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  9 11:00:40 compute-0 nova_compute[189493]: 2025-12-09 11:00:40.221 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  9 11:00:40 compute-0 nova_compute[189493]: 2025-12-09 11:00:40.222 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  9 11:00:40 compute-0 nova_compute[189493]: 2025-12-09 11:00:40.222 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  9 11:00:40 compute-0 nova_compute[189493]: 2025-12-09 11:00:40.312 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/32dd7fb0-7003-48cc-b688-4b94946c911f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  9 11:00:40 compute-0 nova_compute[189493]: 2025-12-09 11:00:40.398 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/32dd7fb0-7003-48cc-b688-4b94946c911f/disk --force-share --output=json" returned: 0 in 0.085s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  9 11:00:40 compute-0 nova_compute[189493]: 2025-12-09 11:00:40.399 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/32dd7fb0-7003-48cc-b688-4b94946c911f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  9 11:00:40 compute-0 nova_compute[189493]: 2025-12-09 11:00:40.487 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/32dd7fb0-7003-48cc-b688-4b94946c911f/disk --force-share --output=json" returned: 0 in 0.088s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  9 11:00:40 compute-0 nova_compute[189493]: 2025-12-09 11:00:40.490 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/32dd7fb0-7003-48cc-b688-4b94946c911f/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  9 11:00:40 compute-0 nova_compute[189493]: 2025-12-09 11:00:40.556 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/32dd7fb0-7003-48cc-b688-4b94946c911f/disk.eph0 --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  9 11:00:40 compute-0 nova_compute[189493]: 2025-12-09 11:00:40.557 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/32dd7fb0-7003-48cc-b688-4b94946c911f/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  9 11:00:40 compute-0 nova_compute[189493]: 2025-12-09 11:00:40.615 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/32dd7fb0-7003-48cc-b688-4b94946c911f/disk.eph0 --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  9 11:00:40 compute-0 nova_compute[189493]: 2025-12-09 11:00:40.621 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1bddf2bf-8932-4428-97d7-7342a7ec414b/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  9 11:00:40 compute-0 nova_compute[189493]: 2025-12-09 11:00:40.681 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1bddf2bf-8932-4428-97d7-7342a7ec414b/disk --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  9 11:00:40 compute-0 nova_compute[189493]: 2025-12-09 11:00:40.684 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1bddf2bf-8932-4428-97d7-7342a7ec414b/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  9 11:00:40 compute-0 nova_compute[189493]: 2025-12-09 11:00:40.753 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1bddf2bf-8932-4428-97d7-7342a7ec414b/disk --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  9 11:00:40 compute-0 nova_compute[189493]: 2025-12-09 11:00:40.754 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  9 11:00:40 compute-0 nova_compute[189493]: 2025-12-09 11:00:40.856 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.eph0 --force-share --output=json" returned: 0 in 0.102s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  9 11:00:40 compute-0 nova_compute[189493]: 2025-12-09 11:00:40.858 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  9 11:00:40 compute-0 nova_compute[189493]: 2025-12-09 11:00:40.950 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1bddf2bf-8932-4428-97d7-7342a7ec414b/disk.eph0 --force-share --output=json" returned: 0 in 0.092s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  9 11:00:40 compute-0 nova_compute[189493]: 2025-12-09 11:00:40.962 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  9 11:00:41 compute-0 nova_compute[189493]: 2025-12-09 11:00:41.044 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  9 11:00:41 compute-0 nova_compute[189493]: 2025-12-09 11:00:41.046 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  9 11:00:41 compute-0 nova_compute[189493]: 2025-12-09 11:00:41.161 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk --force-share --output=json" returned: 0 in 0.115s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  9 11:00:41 compute-0 nova_compute[189493]: 2025-12-09 11:00:41.163 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  9 11:00:41 compute-0 nova_compute[189493]: 2025-12-09 11:00:41.257 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.eph0 --force-share --output=json" returned: 0 in 0.094s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  9 11:00:41 compute-0 nova_compute[189493]: 2025-12-09 11:00:41.260 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  9 11:00:41 compute-0 nova_compute[189493]: 2025-12-09 11:00:41.360 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.eph0 --force-share --output=json" returned: 0 in 0.101s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  9 11:00:41 compute-0 nova_compute[189493]: 2025-12-09 11:00:41.373 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  9 11:00:41 compute-0 nova_compute[189493]: 2025-12-09 11:00:41.475 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk --force-share --output=json" returned: 0 in 0.102s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  9 11:00:41 compute-0 nova_compute[189493]: 2025-12-09 11:00:41.477 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  9 11:00:41 compute-0 nova_compute[189493]: 2025-12-09 11:00:41.538 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  9 11:00:41 compute-0 nova_compute[189493]: 2025-12-09 11:00:41.539 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  9 11:00:41 compute-0 nova_compute[189493]: 2025-12-09 11:00:41.649 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.eph0 --force-share --output=json" returned: 0 in 0.110s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  9 11:00:41 compute-0 nova_compute[189493]: 2025-12-09 11:00:41.652 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  9 11:00:41 compute-0 nova_compute[189493]: 2025-12-09 11:00:41.740 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.eph0 --force-share --output=json" returned: 0 in 0.088s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  9 11:00:42 compute-0 nova_compute[189493]: 2025-12-09 11:00:42.266 189497 WARNING nova.virt.libvirt.driver [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  9 11:00:42 compute-0 nova_compute[189493]: 2025-12-09 11:00:42.268 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4620MB free_disk=72.11589431762695GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  9 11:00:42 compute-0 nova_compute[189493]: 2025-12-09 11:00:42.268 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  9 11:00:42 compute-0 nova_compute[189493]: 2025-12-09 11:00:42.269 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  9 11:00:42 compute-0 nova_compute[189493]: 2025-12-09 11:00:42.409 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:00:42 compute-0 nova_compute[189493]: 2025-12-09 11:00:42.435 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Instance 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  9 11:00:42 compute-0 nova_compute[189493]: 2025-12-09 11:00:42.436 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Instance 1bddf2bf-8932-4428-97d7-7342a7ec414b actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  9 11:00:42 compute-0 nova_compute[189493]: 2025-12-09 11:00:42.436 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Instance 32dd7fb0-7003-48cc-b688-4b94946c911f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  9 11:00:42 compute-0 nova_compute[189493]: 2025-12-09 11:00:42.436 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Instance 7b43ca09-ed65-4465-9fcc-95caa6dc9a88 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  9 11:00:42 compute-0 nova_compute[189493]: 2025-12-09 11:00:42.436 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  9 11:00:42 compute-0 nova_compute[189493]: 2025-12-09 11:00:42.437 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=2560MB phys_disk=79GB used_disk=8GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  9 11:00:42 compute-0 nova_compute[189493]: 2025-12-09 11:00:42.560 189497 DEBUG nova.compute.provider_tree [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Inventory has not changed in ProviderTree for provider: cdc1168d-33c9-4d2c-8f23-1b695a68afd0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  9 11:00:42 compute-0 nova_compute[189493]: 2025-12-09 11:00:42.582 189497 DEBUG nova.scheduler.client.report [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Inventory has not changed for provider cdc1168d-33c9-4d2c-8f23-1b695a68afd0 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  9 11:00:42 compute-0 nova_compute[189493]: 2025-12-09 11:00:42.584 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  9 11:00:42 compute-0 nova_compute[189493]: 2025-12-09 11:00:42.585 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.316s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  9 11:00:43 compute-0 nova_compute[189493]: 2025-12-09 11:00:43.063 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:00:43 compute-0 podman[245076]: 2025-12-09 11:00:43.954285664 +0000 UTC m=+0.101896709 container health_status 0391d8911d61abd7376f1f93f329cadfe8d3add845c9e6f46fc2c3dfbcc4f02a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Dec  9 11:00:45 compute-0 nova_compute[189493]: 2025-12-09 11:00:45.243 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  9 11:00:45 compute-0 nova_compute[189493]: 2025-12-09 11:00:45.243 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  9 11:00:46 compute-0 podman[245096]: 2025-12-09 11:00:46.931037323 +0000 UTC m=+0.076660859 container health_status 8508a94dacd5acdb5dbf860f4282331529be5c86ebd3e90b10e1dde8bc5013e9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  9 11:00:47 compute-0 nova_compute[189493]: 2025-12-09 11:00:47.413 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:00:48 compute-0 nova_compute[189493]: 2025-12-09 11:00:48.065 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:00:51 compute-0 podman[245123]: 2025-12-09 11:00:51.969227785 +0000 UTC m=+0.096509377 container health_status ceb1c84a2b093143b9383b7e11364d7e851348d724743a0cd9ce4fd0c7070c92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_ipmi, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202)
Dec  9 11:00:51 compute-0 podman[245122]: 2025-12-09 11:00:51.978267626 +0000 UTC m=+0.108755053 container health_status 8ad198c17f1da12dc50d5e17562d0139fb2a2f84db056ee9551dbf4f34c4cb9d (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, distribution-scope=public, maintainer=Red Hat, Inc., com.redhat.component=ubi9-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, managed_by=edpm_ansible, config_id=edpm, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, io.openshift.tags=base rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9, container_name=kepler, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0, name=ubi9, release=1214.1726694543, architecture=x86_64, io.openshift.expose-services=)
Dec  9 11:00:52 compute-0 nova_compute[189493]: 2025-12-09 11:00:52.416 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:00:53 compute-0 nova_compute[189493]: 2025-12-09 11:00:53.069 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:00:55 compute-0 podman[245158]: 2025-12-09 11:00:55.955602733 +0000 UTC m=+0.088647497 container health_status b432835229990b9e7cd237d75f8273b15e565fca524d4ea9a7c1f1bf3c773614 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute)
Dec  9 11:00:55 compute-0 podman[245157]: 2025-12-09 11:00:55.961895271 +0000 UTC m=+0.101762527 container health_status 8f562587c42532f877bd4ac5090cf2d81dd9415b6201e22f74972e6d6b9e9403 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Dec  9 11:00:57 compute-0 nova_compute[189493]: 2025-12-09 11:00:57.418 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:00:58 compute-0 nova_compute[189493]: 2025-12-09 11:00:58.073 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:00:59 compute-0 podman[203687]: time="2025-12-09T11:00:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  9 11:00:59 compute-0 podman[203687]: @ - - [09/Dec/2025:11:00:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Dec  9 11:00:59 compute-0 podman[203687]: @ - - [09/Dec/2025:11:00:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4801 "" "Go-http-client/1.1"
Dec  9 11:01:01 compute-0 openstack_network_exporter[205823]: ERROR   11:01:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  9 11:01:01 compute-0 openstack_network_exporter[205823]: ERROR   11:01:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  9 11:01:01 compute-0 openstack_network_exporter[205823]: ERROR   11:01:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  9 11:01:01 compute-0 openstack_network_exporter[205823]: ERROR   11:01:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  9 11:01:01 compute-0 openstack_network_exporter[205823]: 
Dec  9 11:01:01 compute-0 openstack_network_exporter[205823]: ERROR   11:01:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  9 11:01:01 compute-0 openstack_network_exporter[205823]: 
Dec  9 11:01:02 compute-0 podman[245205]: 2025-12-09 11:01:02.0031699 +0000 UTC m=+0.134043984 container health_status 5da5cd4e36e0bba48fb617392bc8983ed1dbced7e4599ef74bb3327a2d50468d (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, release=1755695350, version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, config_id=edpm, distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, vendor=Red Hat, Inc., container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, io.buildah.version=1.33.7, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, managed_by=edpm_ansible, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec  9 11:01:02 compute-0 nova_compute[189493]: 2025-12-09 11:01:02.421 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:01:03 compute-0 nova_compute[189493]: 2025-12-09 11:01:03.076 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:01:03 compute-0 podman[245225]: 2025-12-09 11:01:03.978346972 +0000 UTC m=+0.134415905 container health_status e0a077177b2f078df1f170a6e5c0e8e08d4365b999ec0c487047ed6ab628f3d6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  9 11:01:05 compute-0 podman[245250]: 2025-12-09 11:01:05.975144723 +0000 UTC m=+0.119622213 container health_status d3a438131bb4ae6fd62d2e1493edbbbd51d1b8d6cbe1e9243f414a3aa421452b (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  9 11:01:07 compute-0 nova_compute[189493]: 2025-12-09 11:01:07.426 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:01:08 compute-0 nova_compute[189493]: 2025-12-09 11:01:08.079 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:01:12 compute-0 nova_compute[189493]: 2025-12-09 11:01:12.428 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:01:13 compute-0 nova_compute[189493]: 2025-12-09 11:01:13.083 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:01:14 compute-0 podman[245272]: 2025-12-09 11:01:14.834372949 +0000 UTC m=+0.117189167 container health_status 0391d8911d61abd7376f1f93f329cadfe8d3add845c9e6f46fc2c3dfbcc4f02a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=multipathd, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd)
Dec  9 11:01:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:01:16.993 106644 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  9 11:01:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:01:16.995 106644 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  9 11:01:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:01:16.996 106644 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  9 11:01:17 compute-0 nova_compute[189493]: 2025-12-09 11:01:17.431 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:01:17 compute-0 podman[245292]: 2025-12-09 11:01:17.921003045 +0000 UTC m=+0.077764925 container health_status 8508a94dacd5acdb5dbf860f4282331529be5c86ebd3e90b10e1dde8bc5013e9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  9 11:01:18 compute-0 nova_compute[189493]: 2025-12-09 11:01:18.087 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:01:22 compute-0 nova_compute[189493]: 2025-12-09 11:01:22.432 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:01:22 compute-0 podman[245319]: 2025-12-09 11:01:22.973008178 +0000 UTC m=+0.104000986 container health_status 8ad198c17f1da12dc50d5e17562d0139fb2a2f84db056ee9551dbf4f34c4cb9d (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release-0.7.12=, version=9.4, architecture=x86_64, io.openshift.tags=base rhel9, name=ubi9, release=1214.1726694543, com.redhat.component=ubi9-container, io.buildah.version=1.29.0, vendor=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, config_id=edpm, maintainer=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible)
Dec  9 11:01:23 compute-0 podman[245320]: 2025-12-09 11:01:23.000208813 +0000 UTC m=+0.133275077 container health_status ceb1c84a2b093143b9383b7e11364d7e851348d724743a0cd9ce4fd0c7070c92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm)
Dec  9 11:01:23 compute-0 nova_compute[189493]: 2025-12-09 11:01:23.093 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:01:24 compute-0 nova_compute[189493]: 2025-12-09 11:01:24.760 189497 DEBUG oslo_concurrency.lockutils [None req-e481dd27-e40f-43ac-a22e-3f534f6648fd e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Acquiring lock "1bddf2bf-8932-4428-97d7-7342a7ec414b" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  9 11:01:24 compute-0 nova_compute[189493]: 2025-12-09 11:01:24.761 189497 DEBUG oslo_concurrency.lockutils [None req-e481dd27-e40f-43ac-a22e-3f534f6648fd e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lock "1bddf2bf-8932-4428-97d7-7342a7ec414b" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  9 11:01:24 compute-0 nova_compute[189493]: 2025-12-09 11:01:24.761 189497 DEBUG oslo_concurrency.lockutils [None req-e481dd27-e40f-43ac-a22e-3f534f6648fd e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Acquiring lock "1bddf2bf-8932-4428-97d7-7342a7ec414b-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  9 11:01:24 compute-0 nova_compute[189493]: 2025-12-09 11:01:24.762 189497 DEBUG oslo_concurrency.lockutils [None req-e481dd27-e40f-43ac-a22e-3f534f6648fd e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lock "1bddf2bf-8932-4428-97d7-7342a7ec414b-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  9 11:01:24 compute-0 nova_compute[189493]: 2025-12-09 11:01:24.762 189497 DEBUG oslo_concurrency.lockutils [None req-e481dd27-e40f-43ac-a22e-3f534f6648fd e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lock "1bddf2bf-8932-4428-97d7-7342a7ec414b-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  9 11:01:24 compute-0 nova_compute[189493]: 2025-12-09 11:01:24.764 189497 INFO nova.compute.manager [None req-e481dd27-e40f-43ac-a22e-3f534f6648fd e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 1bddf2bf-8932-4428-97d7-7342a7ec414b] Terminating instance#033[00m
Dec  9 11:01:24 compute-0 nova_compute[189493]: 2025-12-09 11:01:24.766 189497 DEBUG nova.compute.manager [None req-e481dd27-e40f-43ac-a22e-3f534f6648fd e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 1bddf2bf-8932-4428-97d7-7342a7ec414b] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec  9 11:01:24 compute-0 kernel: tap7819acf8-da (unregistering): left promiscuous mode
Dec  9 11:01:24 compute-0 NetworkManager[56302]: <info>  [1765278084.8266] device (tap7819acf8-da): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  9 11:01:24 compute-0 ovn_controller[97780]: 2025-12-09T11:01:24Z|00050|binding|INFO|Releasing lport 7819acf8-daa2-4391-96d4-ef33c260f794 from this chassis (sb_readonly=0)
Dec  9 11:01:24 compute-0 nova_compute[189493]: 2025-12-09 11:01:24.839 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:01:24 compute-0 ovn_controller[97780]: 2025-12-09T11:01:24Z|00051|binding|INFO|Setting lport 7819acf8-daa2-4391-96d4-ef33c260f794 down in Southbound
Dec  9 11:01:24 compute-0 ovn_controller[97780]: 2025-12-09T11:01:24Z|00052|binding|INFO|Removing iface tap7819acf8-da ovn-installed in OVS
Dec  9 11:01:24 compute-0 nova_compute[189493]: 2025-12-09 11:01:24.846 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:01:24 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:01:24.849 106644 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:01:4e:b4 192.168.0.212'], port_security=['fa:16:3e:01:4e:b4 192.168.0.212'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'vnf-scaleup_group-5eiooafn7y6w-x2vp5udxgoax-du67okrzyrz6-port-copozzjp5fc5', 'neutron:cidrs': '192.168.0.212/24', 'neutron:device_id': '1bddf2bf-8932-4428-97d7-7342a7ec414b', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c5af7354-5afe-400a-9e13-5500648117d8', 'neutron:port_capabilities': '', 'neutron:port_name': 'vnf-scaleup_group-5eiooafn7y6w-x2vp5udxgoax-du67okrzyrz6-port-copozzjp5fc5', 'neutron:project_id': '736bbfddbeea47e3ac9d863ba120b8f2', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd86dfae4-cfd5-480d-a50e-0084326b1439', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.172', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=61df917c-633f-4b35-857d-39fd859caf35, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fa01184a610>], logical_port=7819acf8-daa2-4391-96d4-ef33c260f794) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fa01184a610>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  9 11:01:24 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:01:24.850 106644 INFO neutron.agent.ovn.metadata.agent [-] Port 7819acf8-daa2-4391-96d4-ef33c260f794 in datapath c5af7354-5afe-400a-9e13-5500648117d8 unbound from our chassis#033[00m
Dec  9 11:01:24 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:01:24.851 106644 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c5af7354-5afe-400a-9e13-5500648117d8#033[00m
Dec  9 11:01:24 compute-0 nova_compute[189493]: 2025-12-09 11:01:24.858 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:01:24 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:01:24.872 239934 DEBUG oslo.privsep.daemon [-] privsep: reply[83df7a6c-9dfb-4299-a236-90edb6ab6ad6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  9 11:01:24 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:01:24.914 239949 DEBUG oslo.privsep.daemon [-] privsep: reply[ac87543e-8d39-412c-93e2-335d69c99c3b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  9 11:01:24 compute-0 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000002.scope: Deactivated successfully.
Dec  9 11:01:24 compute-0 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000002.scope: Consumed 7min 56.864s CPU time.
Dec  9 11:01:24 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:01:24.918 239949 DEBUG oslo.privsep.daemon [-] privsep: reply[6e7f405e-cfa7-4aac-83ef-0d72d6161245]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  9 11:01:24 compute-0 systemd-machined[155790]: Machine qemu-2-instance-00000002 terminated.
Dec  9 11:01:24 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:01:24.960 239949 DEBUG oslo.privsep.daemon [-] privsep: reply[200384ca-ab65-428a-b8e4-ac36a4da5fa4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  9 11:01:24 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:01:24.995 239934 DEBUG oslo.privsep.daemon [-] privsep: reply[47d61d55-90e6-4e57-b60a-2b6d3e21e3b9]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc5af7354-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:bf:0d:a0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 11, 'tx_packets': 11, 'rx_bytes': 742, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 11, 'tx_packets': 11, 'rx_bytes': 742, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 12], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 396027, 'reachable_time': 18085, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 245370, 'error': None, 'target': 'ovnmeta-c5af7354-5afe-400a-9e13-5500648117d8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  9 11:01:25 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:01:25.020 239934 DEBUG oslo.privsep.daemon [-] privsep: reply[8be8e4ef-9ee3-469c-abbf-375452dbcb5e]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapc5af7354-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 396043, 'tstamp': 396043}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 245377, 'error': None, 'target': 'ovnmeta-c5af7354-5afe-400a-9e13-5500648117d8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '192.168.0.2'], ['IFA_LOCAL', '192.168.0.2'], ['IFA_BROADCAST', '192.168.0.255'], ['IFA_LABEL', 'tapc5af7354-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 396046, 'tstamp': 396046}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 245377, 'error': None, 'target': 'ovnmeta-c5af7354-5afe-400a-9e13-5500648117d8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  9 11:01:25 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:01:25.024 106644 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc5af7354-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  9 11:01:25 compute-0 nova_compute[189493]: 2025-12-09 11:01:25.026 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:01:25 compute-0 nova_compute[189493]: 2025-12-09 11:01:25.033 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:01:25 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:01:25.033 106644 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc5af7354-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  9 11:01:25 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:01:25.033 106644 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  9 11:01:25 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:01:25.034 106644 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc5af7354-50, col_values=(('external_ids', {'iface-id': '3eb47070-bc26-4827-a5a8-68152f05129c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  9 11:01:25 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:01:25.034 106644 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  9 11:01:25 compute-0 nova_compute[189493]: 2025-12-09 11:01:25.074 189497 INFO nova.virt.libvirt.driver [-] [instance: 1bddf2bf-8932-4428-97d7-7342a7ec414b] Instance destroyed successfully.#033[00m
Dec  9 11:01:25 compute-0 nova_compute[189493]: 2025-12-09 11:01:25.075 189497 DEBUG nova.objects.instance [None req-e481dd27-e40f-43ac-a22e-3f534f6648fd e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lazy-loading 'resources' on Instance uuid 1bddf2bf-8932-4428-97d7-7342a7ec414b obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  9 11:01:25 compute-0 nova_compute[189493]: 2025-12-09 11:01:25.084 189497 DEBUG nova.compute.manager [req-66470ac3-3910-49b0-b1a9-932eec934339 req-6d2b2f2d-d5e9-43b8-95c1-f2ffd6ec0f74 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] [instance: 1bddf2bf-8932-4428-97d7-7342a7ec414b] Received event network-vif-unplugged-7819acf8-daa2-4391-96d4-ef33c260f794 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  9 11:01:25 compute-0 nova_compute[189493]: 2025-12-09 11:01:25.085 189497 DEBUG oslo_concurrency.lockutils [req-66470ac3-3910-49b0-b1a9-932eec934339 req-6d2b2f2d-d5e9-43b8-95c1-f2ffd6ec0f74 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] Acquiring lock "1bddf2bf-8932-4428-97d7-7342a7ec414b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  9 11:01:25 compute-0 nova_compute[189493]: 2025-12-09 11:01:25.085 189497 DEBUG oslo_concurrency.lockutils [req-66470ac3-3910-49b0-b1a9-932eec934339 req-6d2b2f2d-d5e9-43b8-95c1-f2ffd6ec0f74 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] Lock "1bddf2bf-8932-4428-97d7-7342a7ec414b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  9 11:01:25 compute-0 nova_compute[189493]: 2025-12-09 11:01:25.085 189497 DEBUG oslo_concurrency.lockutils [req-66470ac3-3910-49b0-b1a9-932eec934339 req-6d2b2f2d-d5e9-43b8-95c1-f2ffd6ec0f74 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] Lock "1bddf2bf-8932-4428-97d7-7342a7ec414b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  9 11:01:25 compute-0 nova_compute[189493]: 2025-12-09 11:01:25.086 189497 DEBUG nova.compute.manager [req-66470ac3-3910-49b0-b1a9-932eec934339 req-6d2b2f2d-d5e9-43b8-95c1-f2ffd6ec0f74 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] [instance: 1bddf2bf-8932-4428-97d7-7342a7ec414b] No waiting events found dispatching network-vif-unplugged-7819acf8-daa2-4391-96d4-ef33c260f794 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  9 11:01:25 compute-0 nova_compute[189493]: 2025-12-09 11:01:25.086 189497 DEBUG nova.compute.manager [req-66470ac3-3910-49b0-b1a9-932eec934339 req-6d2b2f2d-d5e9-43b8-95c1-f2ffd6ec0f74 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] [instance: 1bddf2bf-8932-4428-97d7-7342a7ec414b] Received event network-vif-unplugged-7819acf8-daa2-4391-96d4-ef33c260f794 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
Dec  9 11:01:25 compute-0 nova_compute[189493]: 2025-12-09 11:01:25.101 189497 DEBUG nova.virt.libvirt.vif [None req-e481dd27-e40f-43ac-a22e-3f534f6648fd e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-09T10:49:58Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='vn-afn7y6w-x2vp5udxgoax-du67okrzyrz6-vnf-c7uowjdwt46l',ec2_ids=<?>,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-afn7y6w-x2vp5udxgoax-du67okrzyrz6-vnf-c7uowjdwt46l',id=2,image_ref='53d12211-5d5c-4333-b3ee-e3dcf1663767',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-09T10:50:04Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='24f6e5b2-dd43-46f1-87a4-e2efc1300914'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='736bbfddbeea47e3ac9d863ba120b8f2',ramdisk_id='',reservation_id='r-ljrndswf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,admin,member',image_base_image_ref='53d12211-5d5c-4333-b3ee-e3dcf1663767',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',owner_project_name='admin',owner_user_name='admin'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-09T10:50:04Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT03MTk3NDI2NjE4OTEyMjA2NTYzPT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTcxOTc0MjY2MTg5MTIyMDY1NjM9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09NzE5NzQyNjYxODkxMjIwNjU2Mz09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTcxOTc0MjY2MTg5MTIyMDY1NjM9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT03MTk3NDI2NjE4OTEyMjA2NTYzPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT03MTk3NDI2NjE4OTEyMjA2NTYzPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob2JqZWN0KToKC
Dec  9 11:01:25 compute-0 nova_compute[189493]: Cclc1xuJywgJyAnLmpvaW4oYXJncykpICAjIG5vcWEKICAgIHRyeToKICAgICAgICBscyA9IExvZ1N0cmVhbSgpCiAgICAgICAgcCA9IHN1YnByb2Nlc3MuUG9wZW4oYXJncywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09NzE5NzQyNjYxODkxMjIwNjU2Mz09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTcxOTc0MjY2MTg5MTIyMDY1NjM9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT03MTk3NDI2NjE4OTEyMjA2NTYzPT0tLQo=',user_id='e6d3a937c2a74eb0816d9f63820935e0',uuid=1bddf2bf-8932-4428-97d7-7342a7ec414b,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "7819acf8-daa2-4391-96d4-ef33c260f794", "address": "fa:16:3e:01:4e:b4", "network": {"id": "c5af7354-5afe-400a-9e13-5500648117d8", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.212", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.172", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "736bbfddbeea47e3ac9d863ba120b8f2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7819acf8-da", "ovs_interfaceid": "7819acf8-daa2-4391-96d4-ef33c260f794", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  9 11:01:25 compute-0 nova_compute[189493]: 2025-12-09 11:01:25.102 189497 DEBUG nova.network.os_vif_util [None req-e481dd27-e40f-43ac-a22e-3f534f6648fd e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Converting VIF {"id": "7819acf8-daa2-4391-96d4-ef33c260f794", "address": "fa:16:3e:01:4e:b4", "network": {"id": "c5af7354-5afe-400a-9e13-5500648117d8", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.212", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.172", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "736bbfddbeea47e3ac9d863ba120b8f2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7819acf8-da", "ovs_interfaceid": "7819acf8-daa2-4391-96d4-ef33c260f794", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  9 11:01:25 compute-0 nova_compute[189493]: 2025-12-09 11:01:25.103 189497 DEBUG nova.network.os_vif_util [None req-e481dd27-e40f-43ac-a22e-3f534f6648fd e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:01:4e:b4,bridge_name='br-int',has_traffic_filtering=True,id=7819acf8-daa2-4391-96d4-ef33c260f794,network=Network(c5af7354-5afe-400a-9e13-5500648117d8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap7819acf8-da') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  9 11:01:25 compute-0 nova_compute[189493]: 2025-12-09 11:01:25.104 189497 DEBUG os_vif [None req-e481dd27-e40f-43ac-a22e-3f534f6648fd e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:01:4e:b4,bridge_name='br-int',has_traffic_filtering=True,id=7819acf8-daa2-4391-96d4-ef33c260f794,network=Network(c5af7354-5afe-400a-9e13-5500648117d8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap7819acf8-da') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  9 11:01:25 compute-0 nova_compute[189493]: 2025-12-09 11:01:25.107 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:01:25 compute-0 nova_compute[189493]: 2025-12-09 11:01:25.107 189497 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7819acf8-da, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  9 11:01:25 compute-0 nova_compute[189493]: 2025-12-09 11:01:25.109 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:01:25 compute-0 nova_compute[189493]: 2025-12-09 11:01:25.111 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:01:25 compute-0 nova_compute[189493]: 2025-12-09 11:01:25.120 189497 INFO os_vif [None req-e481dd27-e40f-43ac-a22e-3f534f6648fd e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:01:4e:b4,bridge_name='br-int',has_traffic_filtering=True,id=7819acf8-daa2-4391-96d4-ef33c260f794,network=Network(c5af7354-5afe-400a-9e13-5500648117d8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap7819acf8-da')#033[00m
Dec  9 11:01:25 compute-0 nova_compute[189493]: 2025-12-09 11:01:25.121 189497 INFO nova.virt.libvirt.driver [None req-e481dd27-e40f-43ac-a22e-3f534f6648fd e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 1bddf2bf-8932-4428-97d7-7342a7ec414b] Deleting instance files /var/lib/nova/instances/1bddf2bf-8932-4428-97d7-7342a7ec414b_del#033[00m
Dec  9 11:01:25 compute-0 nova_compute[189493]: 2025-12-09 11:01:25.122 189497 INFO nova.virt.libvirt.driver [None req-e481dd27-e40f-43ac-a22e-3f534f6648fd e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 1bddf2bf-8932-4428-97d7-7342a7ec414b] Deletion of /var/lib/nova/instances/1bddf2bf-8932-4428-97d7-7342a7ec414b_del complete#033[00m
Dec  9 11:01:25 compute-0 nova_compute[189493]: 2025-12-09 11:01:25.226 189497 DEBUG nova.virt.libvirt.host [None req-e481dd27-e40f-43ac-a22e-3f534f6648fd e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Checking UEFI support for host arch (x86_64) supports_uefi /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1754#033[00m
Dec  9 11:01:25 compute-0 nova_compute[189493]: 2025-12-09 11:01:25.227 189497 INFO nova.virt.libvirt.host [None req-e481dd27-e40f-43ac-a22e-3f534f6648fd e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] UEFI support detected#033[00m
Dec  9 11:01:25 compute-0 nova_compute[189493]: 2025-12-09 11:01:25.230 189497 INFO nova.compute.manager [None req-e481dd27-e40f-43ac-a22e-3f534f6648fd e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 1bddf2bf-8932-4428-97d7-7342a7ec414b] Took 0.46 seconds to destroy the instance on the hypervisor.#033[00m
Dec  9 11:01:25 compute-0 nova_compute[189493]: 2025-12-09 11:01:25.231 189497 DEBUG oslo.service.loopingcall [None req-e481dd27-e40f-43ac-a22e-3f534f6648fd e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec  9 11:01:25 compute-0 nova_compute[189493]: 2025-12-09 11:01:25.231 189497 DEBUG nova.compute.manager [-] [instance: 1bddf2bf-8932-4428-97d7-7342a7ec414b] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec  9 11:01:25 compute-0 nova_compute[189493]: 2025-12-09 11:01:25.232 189497 DEBUG nova.network.neutron [-] [instance: 1bddf2bf-8932-4428-97d7-7342a7ec414b] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec  9 11:01:25 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:01:25.294 106644 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=7, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '56:ee:a7', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '3e:d4:ad:27:cb:0f'}, ipsec=False) old=SB_Global(nb_cfg=6) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  9 11:01:25 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:01:25.295 106644 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  9 11:01:25 compute-0 rsyslogd[236818]: message too long (8192) with configured size 8096, begin of message is: 2025-12-09 11:01:25.101 189497 DEBUG nova.virt.libvirt.vif [None req-e481dd27-e4 [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Dec  9 11:01:25 compute-0 nova_compute[189493]: 2025-12-09 11:01:25.305 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:01:26 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:01:26.298 106644 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=9ec27861-bbe8-48fb-b30f-25b967e1609e, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '7'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  9 11:01:27 compute-0 podman[245394]: 2025-12-09 11:01:27.008494865 +0000 UTC m=+0.139507254 container health_status 8f562587c42532f877bd4ac5090cf2d81dd9415b6201e22f74972e6d6b9e9403 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  9 11:01:27 compute-0 podman[245395]: 2025-12-09 11:01:27.01393728 +0000 UTC m=+0.146054848 container health_status b432835229990b9e7cd237d75f8273b15e565fca524d4ea9a7c1f1bf3c773614 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=edpm, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true)
Dec  9 11:01:27 compute-0 nova_compute[189493]: 2025-12-09 11:01:27.135 189497 DEBUG nova.network.neutron [-] [instance: 1bddf2bf-8932-4428-97d7-7342a7ec414b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  9 11:01:27 compute-0 nova_compute[189493]: 2025-12-09 11:01:27.170 189497 INFO nova.compute.manager [-] [instance: 1bddf2bf-8932-4428-97d7-7342a7ec414b] Took 1.94 seconds to deallocate network for instance.#033[00m
Dec  9 11:01:27 compute-0 nova_compute[189493]: 2025-12-09 11:01:27.202 189497 DEBUG nova.compute.manager [req-d67ce31e-e023-4041-9f65-223bdbf2903f req-9e0cbf2f-3344-4cea-baf6-93f77759c386 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] [instance: 1bddf2bf-8932-4428-97d7-7342a7ec414b] Received event network-vif-plugged-7819acf8-daa2-4391-96d4-ef33c260f794 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  9 11:01:27 compute-0 nova_compute[189493]: 2025-12-09 11:01:27.203 189497 DEBUG oslo_concurrency.lockutils [req-d67ce31e-e023-4041-9f65-223bdbf2903f req-9e0cbf2f-3344-4cea-baf6-93f77759c386 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] Acquiring lock "1bddf2bf-8932-4428-97d7-7342a7ec414b-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  9 11:01:27 compute-0 nova_compute[189493]: 2025-12-09 11:01:27.204 189497 DEBUG oslo_concurrency.lockutils [req-d67ce31e-e023-4041-9f65-223bdbf2903f req-9e0cbf2f-3344-4cea-baf6-93f77759c386 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] Lock "1bddf2bf-8932-4428-97d7-7342a7ec414b-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  9 11:01:27 compute-0 nova_compute[189493]: 2025-12-09 11:01:27.204 189497 DEBUG oslo_concurrency.lockutils [req-d67ce31e-e023-4041-9f65-223bdbf2903f req-9e0cbf2f-3344-4cea-baf6-93f77759c386 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] Lock "1bddf2bf-8932-4428-97d7-7342a7ec414b-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  9 11:01:27 compute-0 nova_compute[189493]: 2025-12-09 11:01:27.205 189497 DEBUG nova.compute.manager [req-d67ce31e-e023-4041-9f65-223bdbf2903f req-9e0cbf2f-3344-4cea-baf6-93f77759c386 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] [instance: 1bddf2bf-8932-4428-97d7-7342a7ec414b] No waiting events found dispatching network-vif-plugged-7819acf8-daa2-4391-96d4-ef33c260f794 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  9 11:01:27 compute-0 nova_compute[189493]: 2025-12-09 11:01:27.206 189497 WARNING nova.compute.manager [req-d67ce31e-e023-4041-9f65-223bdbf2903f req-9e0cbf2f-3344-4cea-baf6-93f77759c386 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] [instance: 1bddf2bf-8932-4428-97d7-7342a7ec414b] Received unexpected event network-vif-plugged-7819acf8-daa2-4391-96d4-ef33c260f794 for instance with vm_state active and task_state deleting.#033[00m
Dec  9 11:01:27 compute-0 nova_compute[189493]: 2025-12-09 11:01:27.207 189497 DEBUG nova.compute.manager [req-d67ce31e-e023-4041-9f65-223bdbf2903f req-9e0cbf2f-3344-4cea-baf6-93f77759c386 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] [instance: 1bddf2bf-8932-4428-97d7-7342a7ec414b] Received event network-changed-7819acf8-daa2-4391-96d4-ef33c260f794 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  9 11:01:27 compute-0 nova_compute[189493]: 2025-12-09 11:01:27.208 189497 DEBUG nova.compute.manager [req-d67ce31e-e023-4041-9f65-223bdbf2903f req-9e0cbf2f-3344-4cea-baf6-93f77759c386 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] [instance: 1bddf2bf-8932-4428-97d7-7342a7ec414b] Refreshing instance network info cache due to event network-changed-7819acf8-daa2-4391-96d4-ef33c260f794. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  9 11:01:27 compute-0 nova_compute[189493]: 2025-12-09 11:01:27.209 189497 DEBUG oslo_concurrency.lockutils [req-d67ce31e-e023-4041-9f65-223bdbf2903f req-9e0cbf2f-3344-4cea-baf6-93f77759c386 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] Acquiring lock "refresh_cache-1bddf2bf-8932-4428-97d7-7342a7ec414b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  9 11:01:27 compute-0 nova_compute[189493]: 2025-12-09 11:01:27.209 189497 DEBUG oslo_concurrency.lockutils [req-d67ce31e-e023-4041-9f65-223bdbf2903f req-9e0cbf2f-3344-4cea-baf6-93f77759c386 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] Acquired lock "refresh_cache-1bddf2bf-8932-4428-97d7-7342a7ec414b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  9 11:01:27 compute-0 nova_compute[189493]: 2025-12-09 11:01:27.210 189497 DEBUG nova.network.neutron [req-d67ce31e-e023-4041-9f65-223bdbf2903f req-9e0cbf2f-3344-4cea-baf6-93f77759c386 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] [instance: 1bddf2bf-8932-4428-97d7-7342a7ec414b] Refreshing network info cache for port 7819acf8-daa2-4391-96d4-ef33c260f794 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  9 11:01:27 compute-0 nova_compute[189493]: 2025-12-09 11:01:27.242 189497 DEBUG oslo_concurrency.lockutils [None req-e481dd27-e40f-43ac-a22e-3f534f6648fd e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  9 11:01:27 compute-0 nova_compute[189493]: 2025-12-09 11:01:27.243 189497 DEBUG oslo_concurrency.lockutils [None req-e481dd27-e40f-43ac-a22e-3f534f6648fd e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  9 11:01:27 compute-0 nova_compute[189493]: 2025-12-09 11:01:27.378 189497 DEBUG nova.network.neutron [req-d67ce31e-e023-4041-9f65-223bdbf2903f req-9e0cbf2f-3344-4cea-baf6-93f77759c386 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] [instance: 1bddf2bf-8932-4428-97d7-7342a7ec414b] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  9 11:01:27 compute-0 nova_compute[189493]: 2025-12-09 11:01:27.384 189497 DEBUG nova.compute.provider_tree [None req-e481dd27-e40f-43ac-a22e-3f534f6648fd e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Inventory has not changed in ProviderTree for provider: cdc1168d-33c9-4d2c-8f23-1b695a68afd0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  9 11:01:27 compute-0 nova_compute[189493]: 2025-12-09 11:01:27.437 189497 DEBUG nova.scheduler.client.report [None req-e481dd27-e40f-43ac-a22e-3f534f6648fd e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Inventory has not changed for provider cdc1168d-33c9-4d2c-8f23-1b695a68afd0 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  9 11:01:27 compute-0 nova_compute[189493]: 2025-12-09 11:01:27.440 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:01:27 compute-0 nova_compute[189493]: 2025-12-09 11:01:27.471 189497 DEBUG oslo_concurrency.lockutils [None req-e481dd27-e40f-43ac-a22e-3f534f6648fd e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.228s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  9 11:01:27 compute-0 nova_compute[189493]: 2025-12-09 11:01:27.501 189497 INFO nova.scheduler.client.report [None req-e481dd27-e40f-43ac-a22e-3f534f6648fd e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Deleted allocations for instance 1bddf2bf-8932-4428-97d7-7342a7ec414b#033[00m
Dec  9 11:01:27 compute-0 nova_compute[189493]: 2025-12-09 11:01:27.572 189497 DEBUG oslo_concurrency.lockutils [None req-e481dd27-e40f-43ac-a22e-3f534f6648fd e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lock "1bddf2bf-8932-4428-97d7-7342a7ec414b" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.811s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  9 11:01:28 compute-0 nova_compute[189493]: 2025-12-09 11:01:28.173 189497 DEBUG nova.network.neutron [req-d67ce31e-e023-4041-9f65-223bdbf2903f req-9e0cbf2f-3344-4cea-baf6-93f77759c386 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] [instance: 1bddf2bf-8932-4428-97d7-7342a7ec414b] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  9 11:01:28 compute-0 nova_compute[189493]: 2025-12-09 11:01:28.207 189497 DEBUG oslo_concurrency.lockutils [req-d67ce31e-e023-4041-9f65-223bdbf2903f req-9e0cbf2f-3344-4cea-baf6-93f77759c386 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] Releasing lock "refresh_cache-1bddf2bf-8932-4428-97d7-7342a7ec414b" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  9 11:01:29 compute-0 podman[203687]: time="2025-12-09T11:01:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  9 11:01:29 compute-0 podman[203687]: @ - - [09/Dec/2025:11:01:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Dec  9 11:01:29 compute-0 podman[203687]: @ - - [09/Dec/2025:11:01:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4805 "" "Go-http-client/1.1"
Dec  9 11:01:30 compute-0 nova_compute[189493]: 2025-12-09 11:01:30.110 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:01:31 compute-0 openstack_network_exporter[205823]: ERROR   11:01:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  9 11:01:31 compute-0 openstack_network_exporter[205823]: ERROR   11:01:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  9 11:01:31 compute-0 openstack_network_exporter[205823]: ERROR   11:01:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  9 11:01:31 compute-0 openstack_network_exporter[205823]: ERROR   11:01:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  9 11:01:31 compute-0 openstack_network_exporter[205823]: ERROR   11:01:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  9 11:01:31 compute-0 nova_compute[189493]: 2025-12-09 11:01:31.843 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  9 11:01:32 compute-0 nova_compute[189493]: 2025-12-09 11:01:32.438 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:01:32 compute-0 podman[245433]: 2025-12-09 11:01:32.991532712 +0000 UTC m=+0.131787847 container health_status 5da5cd4e36e0bba48fb617392bc8983ed1dbced7e4599ef74bb3327a2d50468d (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, distribution-scope=public, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, release=1755695350, architecture=x86_64, managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, version=9.6, container_name=openstack_network_exporter, vendor=Red Hat, Inc.)
Dec  9 11:01:33 compute-0 nova_compute[189493]: 2025-12-09 11:01:33.837 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  9 11:01:34 compute-0 nova_compute[189493]: 2025-12-09 11:01:34.841 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  9 11:01:34 compute-0 nova_compute[189493]: 2025-12-09 11:01:34.843 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  9 11:01:35 compute-0 podman[245452]: 2025-12-09 11:01:35.017715158 +0000 UTC m=+0.148171425 container health_status e0a077177b2f078df1f170a6e5c0e8e08d4365b999ec0c487047ed6ab628f3d6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller)
Dec  9 11:01:35 compute-0 nova_compute[189493]: 2025-12-09 11:01:35.113 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:01:36 compute-0 nova_compute[189493]: 2025-12-09 11:01:36.840 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  9 11:01:36 compute-0 nova_compute[189493]: 2025-12-09 11:01:36.867 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  9 11:01:36 compute-0 podman[245479]: 2025-12-09 11:01:36.949392913 +0000 UTC m=+0.089117560 container health_status d3a438131bb4ae6fd62d2e1493edbbbd51d1b8d6cbe1e9243f414a3aa421452b (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  9 11:01:37 compute-0 nova_compute[189493]: 2025-12-09 11:01:37.441 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:01:37 compute-0 nova_compute[189493]: 2025-12-09 11:01:37.843 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  9 11:01:38 compute-0 nova_compute[189493]: 2025-12-09 11:01:38.841 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  9 11:01:38 compute-0 nova_compute[189493]: 2025-12-09 11:01:38.841 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  9 11:01:39 compute-0 nova_compute[189493]: 2025-12-09 11:01:39.029 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Acquiring lock "refresh_cache-32dd7fb0-7003-48cc-b688-4b94946c911f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  9 11:01:39 compute-0 nova_compute[189493]: 2025-12-09 11:01:39.030 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Acquired lock "refresh_cache-32dd7fb0-7003-48cc-b688-4b94946c911f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  9 11:01:39 compute-0 nova_compute[189493]: 2025-12-09 11:01:39.030 189497 DEBUG nova.network.neutron [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] [instance: 32dd7fb0-7003-48cc-b688-4b94946c911f] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  9 11:01:40 compute-0 nova_compute[189493]: 2025-12-09 11:01:40.069 189497 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765278085.0672002, 1bddf2bf-8932-4428-97d7-7342a7ec414b => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  9 11:01:40 compute-0 nova_compute[189493]: 2025-12-09 11:01:40.069 189497 INFO nova.compute.manager [-] [instance: 1bddf2bf-8932-4428-97d7-7342a7ec414b] VM Stopped (Lifecycle Event)#033[00m
Dec  9 11:01:40 compute-0 nova_compute[189493]: 2025-12-09 11:01:40.092 189497 DEBUG nova.compute.manager [None req-98271968-f645-40ff-a03b-c58badc34918 - - - - - -] [instance: 1bddf2bf-8932-4428-97d7-7342a7ec414b] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  9 11:01:40 compute-0 nova_compute[189493]: 2025-12-09 11:01:40.117 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:01:42 compute-0 nova_compute[189493]: 2025-12-09 11:01:42.444 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:01:43 compute-0 nova_compute[189493]: 2025-12-09 11:01:43.201 189497 DEBUG nova.network.neutron [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] [instance: 32dd7fb0-7003-48cc-b688-4b94946c911f] Updating instance_info_cache with network_info: [{"id": "d6164edf-adb9-4fa5-9e6d-bae85d8af633", "address": "fa:16:3e:83:9f:5d", "network": {"id": "c5af7354-5afe-400a-9e13-5500648117d8", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.98", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.244", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "736bbfddbeea47e3ac9d863ba120b8f2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd6164edf-ad", "ovs_interfaceid": "d6164edf-adb9-4fa5-9e6d-bae85d8af633", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  9 11:01:43 compute-0 nova_compute[189493]: 2025-12-09 11:01:43.225 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Releasing lock "refresh_cache-32dd7fb0-7003-48cc-b688-4b94946c911f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  9 11:01:43 compute-0 nova_compute[189493]: 2025-12-09 11:01:43.227 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] [instance: 32dd7fb0-7003-48cc-b688-4b94946c911f] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  9 11:01:43 compute-0 nova_compute[189493]: 2025-12-09 11:01:43.228 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  9 11:01:43 compute-0 nova_compute[189493]: 2025-12-09 11:01:43.279 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  9 11:01:43 compute-0 nova_compute[189493]: 2025-12-09 11:01:43.280 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  9 11:01:43 compute-0 nova_compute[189493]: 2025-12-09 11:01:43.280 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  9 11:01:43 compute-0 nova_compute[189493]: 2025-12-09 11:01:43.281 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  9 11:01:43 compute-0 nova_compute[189493]: 2025-12-09 11:01:43.378 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/32dd7fb0-7003-48cc-b688-4b94946c911f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  9 11:01:43 compute-0 nova_compute[189493]: 2025-12-09 11:01:43.460 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/32dd7fb0-7003-48cc-b688-4b94946c911f/disk --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  9 11:01:43 compute-0 nova_compute[189493]: 2025-12-09 11:01:43.461 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/32dd7fb0-7003-48cc-b688-4b94946c911f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  9 11:01:43 compute-0 nova_compute[189493]: 2025-12-09 11:01:43.530 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/32dd7fb0-7003-48cc-b688-4b94946c911f/disk --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  9 11:01:43 compute-0 nova_compute[189493]: 2025-12-09 11:01:43.532 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/32dd7fb0-7003-48cc-b688-4b94946c911f/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  9 11:01:43 compute-0 nova_compute[189493]: 2025-12-09 11:01:43.604 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/32dd7fb0-7003-48cc-b688-4b94946c911f/disk.eph0 --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  9 11:01:43 compute-0 nova_compute[189493]: 2025-12-09 11:01:43.605 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/32dd7fb0-7003-48cc-b688-4b94946c911f/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  9 11:01:43 compute-0 nova_compute[189493]: 2025-12-09 11:01:43.689 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/32dd7fb0-7003-48cc-b688-4b94946c911f/disk.eph0 --force-share --output=json" returned: 0 in 0.083s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  9 11:01:43 compute-0 nova_compute[189493]: 2025-12-09 11:01:43.702 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  9 11:01:43 compute-0 nova_compute[189493]: 2025-12-09 11:01:43.793 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk --force-share --output=json" returned: 0 in 0.091s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  9 11:01:43 compute-0 nova_compute[189493]: 2025-12-09 11:01:43.795 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  9 11:01:43 compute-0 nova_compute[189493]: 2025-12-09 11:01:43.854 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  9 11:01:43 compute-0 nova_compute[189493]: 2025-12-09 11:01:43.855 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  9 11:01:43 compute-0 nova_compute[189493]: 2025-12-09 11:01:43.912 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.eph0 --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  9 11:01:43 compute-0 nova_compute[189493]: 2025-12-09 11:01:43.913 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  9 11:01:43 compute-0 nova_compute[189493]: 2025-12-09 11:01:43.980 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.eph0 --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  9 11:01:43 compute-0 nova_compute[189493]: 2025-12-09 11:01:43.988 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  9 11:01:44 compute-0 nova_compute[189493]: 2025-12-09 11:01:44.066 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  9 11:01:44 compute-0 nova_compute[189493]: 2025-12-09 11:01:44.068 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  9 11:01:44 compute-0 nova_compute[189493]: 2025-12-09 11:01:44.137 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  9 11:01:44 compute-0 nova_compute[189493]: 2025-12-09 11:01:44.138 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  9 11:01:44 compute-0 nova_compute[189493]: 2025-12-09 11:01:44.207 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.eph0 --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  9 11:01:44 compute-0 nova_compute[189493]: 2025-12-09 11:01:44.208 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  9 11:01:44 compute-0 nova_compute[189493]: 2025-12-09 11:01:44.264 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.eph0 --force-share --output=json" returned: 0 in 0.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  9 11:01:44 compute-0 nova_compute[189493]: 2025-12-09 11:01:44.669 189497 WARNING nova.virt.libvirt.driver [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  9 11:01:44 compute-0 nova_compute[189493]: 2025-12-09 11:01:44.671 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4780MB free_disk=72.13796997070312GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  9 11:01:44 compute-0 nova_compute[189493]: 2025-12-09 11:01:44.671 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  9 11:01:44 compute-0 nova_compute[189493]: 2025-12-09 11:01:44.672 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  9 11:01:44 compute-0 nova_compute[189493]: 2025-12-09 11:01:44.783 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Instance 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  9 11:01:44 compute-0 nova_compute[189493]: 2025-12-09 11:01:44.784 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Instance 32dd7fb0-7003-48cc-b688-4b94946c911f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  9 11:01:44 compute-0 nova_compute[189493]: 2025-12-09 11:01:44.784 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Instance 7b43ca09-ed65-4465-9fcc-95caa6dc9a88 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  9 11:01:44 compute-0 nova_compute[189493]: 2025-12-09 11:01:44.785 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  9 11:01:44 compute-0 nova_compute[189493]: 2025-12-09 11:01:44.785 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=2048MB phys_disk=79GB used_disk=6GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
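The final-view numbers cross-check against the three instance allocations logged just above ({'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1} each) plus the 512 MB of reserved host memory that appears in the inventory data below:

    instances = 3
    print(instances * 1)          # used_vcpus   -> 3
    print(instances * 512 + 512)  # used_ram MB  -> 2048 (includes the 512 MB reserved)
    print(instances * 2)          # used_disk GB -> 6 (1 GB root + 1 GB ephemeral each)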
Dec  9 11:01:44 compute-0 nova_compute[189493]: 2025-12-09 11:01:44.923 189497 DEBUG nova.compute.provider_tree [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Inventory has not changed in ProviderTree for provider: cdc1168d-33c9-4d2c-8f23-1b695a68afd0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  9 11:01:44 compute-0 nova_compute[189493]: 2025-12-09 11:01:44.944 189497 DEBUG nova.scheduler.client.report [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Inventory has not changed for provider cdc1168d-33c9-4d2c-8f23-1b695a68afd0 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
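Placement derives schedulable capacity per resource class as (total - reserved) * allocation_ratio, so the inventory above works out to:

    inv = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 79,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, v in inv.items():
        print(rc, (v['total'] - v['reserved']) * v['allocation_ratio'])
    # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 70.2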
Dec  9 11:01:44 compute-0 nova_compute[189493]: 2025-12-09 11:01:44.976 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  9 11:01:44 compute-0 nova_compute[189493]: 2025-12-09 11:01:44.977 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.305s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  9 11:01:45 compute-0 nova_compute[189493]: 2025-12-09 11:01:45.120 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:01:45 compute-0 nova_compute[189493]: 2025-12-09 11:01:45.591 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  9 11:01:45 compute-0 nova_compute[189493]: 2025-12-09 11:01:45.592 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  9 11:01:45 compute-0 podman[245538]: 2025-12-09 11:01:45.996631502 +0000 UTC m=+0.125467739 container health_status 0391d8911d61abd7376f1f93f329cadfe8d3add845c9e6f46fc2c3dfbcc4f02a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
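Each podman health_status record above embeds the container's config_data, whose 'healthcheck' block names the probe script and the host directory bind-mounted at /openstack. A small sketch pulling those two fields out (dict trimmed from the multipathd line):

    config_data = {
        'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified',
        'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd',
                        'test': '/openstack/healthcheck'},
    }
    hc = config_data['healthcheck']
    print(f"probe {hc['test']} ({hc['mount']} mounted at /openstack)")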
Dec  9 11:01:47 compute-0 nova_compute[189493]: 2025-12-09 11:01:47.447 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:01:48 compute-0 podman[245558]: 2025-12-09 11:01:48.963555163 +0000 UTC m=+0.114198977 container health_status 8508a94dacd5acdb5dbf860f4282331529be5c86ebd3e90b10e1dde8bc5013e9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  9 11:01:50 compute-0 nova_compute[189493]: 2025-12-09 11:01:50.123 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:01:52 compute-0 nova_compute[189493]: 2025-12-09 11:01:52.451 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:01:53 compute-0 podman[245584]: 2025-12-09 11:01:53.929682843 +0000 UTC m=+0.087196118 container health_status ceb1c84a2b093143b9383b7e11364d7e851348d724743a0cd9ce4fd0c7070c92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec  9 11:01:53 compute-0 podman[245583]: 2025-12-09 11:01:53.981036034 +0000 UTC m=+0.128802398 container health_status 8ad198c17f1da12dc50d5e17562d0139fb2a2f84db056ee9551dbf4f34c4cb9d (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, vcs-type=git, io.openshift.tags=base rhel9, distribution-scope=public, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, vendor=Red Hat, Inc., release-0.7.12=, managed_by=edpm_ansible, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, container_name=kepler, version=9.4, maintainer=Red Hat, Inc., com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.expose-services=, build-date=2024-09-18T21:23:30, name=ubi9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec  9 11:01:55 compute-0 nova_compute[189493]: 2025-12-09 11:01:55.125 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:01:57 compute-0 nova_compute[189493]: 2025-12-09 11:01:57.454 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:01:57 compute-0 podman[245623]: 2025-12-09 11:01:57.964268205 +0000 UTC m=+0.104293974 container health_status b432835229990b9e7cd237d75f8273b15e565fca524d4ea9a7c1f1bf3c773614 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true)
Dec  9 11:01:57 compute-0 podman[245622]: 2025-12-09 11:01:57.989207261 +0000 UTC m=+0.133208775 container health_status 8f562587c42532f877bd4ac5090cf2d81dd9415b6201e22f74972e6d6b9e9403 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, container_name=ovn_metadata_agent)
Dec  9 11:01:59 compute-0 ovn_controller[97780]: 2025-12-09T11:01:59Z|00053|memory_trim|INFO|Detected inactivity (last active 30025 ms ago): trimming memory
Dec  9 11:01:59 compute-0 podman[203687]: time="2025-12-09T11:01:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  9 11:01:59 compute-0 podman[203687]: @ - - [09/Dec/2025:11:01:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Dec  9 11:01:59 compute-0 podman[203687]: @ - - [09/Dec/2025:11:01:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4804 "" "Go-http-client/1.1"
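The two access-log lines above are the podman exporter querying the libpod REST API over the unix socket named in its CONTAINER_HOST setting. A stdlib-only sketch of the same GET (requires read access to /run/podman/podman.sock):

    import http.client, json, socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        def __init__(self, path):
            super().__init__('localhost')   # host is unused over AF_UNIX
            self.unix_path = path
        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self.unix_path)

    conn = UnixHTTPConnection('/run/podman/podman.sock')
    conn.request('GET', '/v4.9.3/libpod/containers/json?all=true')
    print(len(json.loads(conn.getresponse().read())), 'containers')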
Dec  9 11:02:00 compute-0 nova_compute[189493]: 2025-12-09 11:02:00.128 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:02:01 compute-0 openstack_network_exporter[205823]: ERROR   11:02:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  9 11:02:01 compute-0 openstack_network_exporter[205823]: ERROR   11:02:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  9 11:02:01 compute-0 openstack_network_exporter[205823]: ERROR   11:02:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  9 11:02:01 compute-0 openstack_network_exporter[205823]: ERROR   11:02:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  9 11:02:01 compute-0 openstack_network_exporter[205823]: ERROR   11:02:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
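The exporter errors above come from failing to locate ovs-appctl-style control sockets; ovn-northd runs on the control plane, not on a compute node, so its socket is legitimately absent here. A sketch of the kind of lookup involved, with socket-name patterns assumed from the usual <daemon>.<pid>.ctl convention and the rundirs the exporter mounts (/run/ovn, /run/openvswitch):

    import glob

    for daemon, pattern in [('ovn-northd', '/run/ovn/ovn-northd.*.ctl'),
                            ('ovsdb-server', '/run/openvswitch/ovsdb-server.*.ctl')]:
        print(daemon, glob.glob(pattern) or 'no control socket files found')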
Dec  9 11:02:02 compute-0 nova_compute[189493]: 2025-12-09 11:02:02.456 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:02:03 compute-0 podman[245661]: 2025-12-09 11:02:03.977034797 +0000 UTC m=+0.131825538 container health_status 5da5cd4e36e0bba48fb617392bc8983ed1dbced7e4599ef74bb3327a2d50468d (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=minimal rhel9, vcs-type=git, version=9.6, container_name=openstack_network_exporter, managed_by=edpm_ansible, architecture=x86_64, config_id=edpm, io.openshift.expose-services=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, vendor=Red Hat, Inc., distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1755695350, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container)
Dec  9 11:02:05 compute-0 nova_compute[189493]: 2025-12-09 11:02:05.131 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:02:05 compute-0 podman[245681]: 2025-12-09 11:02:05.984570557 +0000 UTC m=+0.132619049 container health_status e0a077177b2f078df1f170a6e5c0e8e08d4365b999ec0c487047ed6ab628f3d6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec  9 11:02:07 compute-0 nova_compute[189493]: 2025-12-09 11:02:07.459 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:02:07 compute-0 podman[245709]: 2025-12-09 11:02:07.975428364 +0000 UTC m=+0.131510650 container health_status d3a438131bb4ae6fd62d2e1493edbbbd51d1b8d6cbe1e9243f414a3aa421452b (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
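The node_exporter systemd collector above only reports units matching its unit-include regex; node_exporter anchors the pattern, so Python's fullmatch is the closest analogue. The unit names below are illustrative examples, not taken from the log:

    import re

    pattern = re.compile(r'(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service')
    for unit in ['edpm_nova_compute.service', 'ovs-vswitchd.service',
                 'virtqemud.service', 'sshd.service']:
        print(unit, bool(pattern.fullmatch(unit)))   # only the last is False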
Dec  9 11:02:10 compute-0 nova_compute[189493]: 2025-12-09 11:02:10.134 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:02:12 compute-0 nova_compute[189493]: 2025-12-09 11:02:12.462 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:02:15 compute-0 nova_compute[189493]: 2025-12-09 11:02:15.139 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:02:16 compute-0 podman[245734]: 2025-12-09 11:02:16.799232287 +0000 UTC m=+0.099994630 container health_status 0391d8911d61abd7376f1f93f329cadfe8d3add845c9e6f46fc2c3dfbcc4f02a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=multipathd, io.buildah.version=1.41.3, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  9 11:02:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:02:16.994 106644 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  9 11:02:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:02:16.995 106644 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  9 11:02:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:02:16.996 106644 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  9 11:02:17 compute-0 nova_compute[189493]: 2025-12-09 11:02:17.466 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:02:19 compute-0 podman[245755]: 2025-12-09 11:02:19.972563775 +0000 UTC m=+0.112609927 container health_status 8508a94dacd5acdb5dbf860f4282331529be5c86ebd3e90b10e1dde8bc5013e9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  9 11:02:20 compute-0 nova_compute[189493]: 2025-12-09 11:02:20.142 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:02:22 compute-0 nova_compute[189493]: 2025-12-09 11:02:22.467 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.295 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads available to execute them, so polling can be expected to take longer than usual. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.295 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.295 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b800>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75cde150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.296 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f8a75e1b7d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.297 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e19820>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75cde150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.298 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75eb8080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75cde150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.298 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75eb8110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75cde150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.298 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b1a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75cde150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.298 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75eb81a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75cde150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.298 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b2c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75cde150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.298 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75cde150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.298 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75cde150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.298 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a78fa8380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75cde150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.298 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a7702ebd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75cde150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.298 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b3e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75cde150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.299 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75cde150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.299 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75eb8440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75cde150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.299 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a78c21460>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75cde150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.299 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b4a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75cde150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.299 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1bce0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75cde150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.299 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75cde150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.299 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1bd10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75cde150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.299 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75cde150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.300 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1bd70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75cde150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.300 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1bdd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75cde150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.300 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1be30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75cde150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.300 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1bf20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75cde150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.300 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b7a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75cde150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.300 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1bfb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75cde150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
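The run of manager lines above describes submitting each registered pollster to a shared thread pool; with a single worker (per the "Processing pollsters for [pollsters] with [1] threads" line) the pollsters execute sequentially. A minimal sketch of that pattern, with a stand-in pollster body:

    from concurrent.futures import ThreadPoolExecutor

    def run_pollster(name):
        return f'{name}: polled'   # stand-in for discovery + sample collection

    pollsters = ['network.incoming.bytes', 'disk.device.capacity',
                 'network.outgoing.packets']
    with ThreadPoolExecutor(max_workers=1) as executor:
        for f in [executor.submit(run_pollster, p) for p in pollsters]:
            print(f.result())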
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.303 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '32dd7fb0-7003-48cc-b688-4b94946c911f', 'name': 'vn-afn7y6w-fel25ona52mn-zi55qxbdeak4-vnf-r5yma3vxwd5y', 'flavor': {'id': 'cf91b364-8467-4d1e-8c92-f7d1fab99905', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '53d12211-5d5c-4333-b3ee-e3dcf1663767'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000003', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '736bbfddbeea47e3ac9d863ba120b8f2', 'user_id': 'e6d3a937c2a74eb0816d9f63820935e0', 'hostId': '17e7a15a42f56673ff2b1bfd38625d4824c4455b94d5713ec4c3a7ee', 'status': 'active', 'metadata': {'metering.server_group': '24f6e5b2-dd43-46f1-87a4-e2efc1300914'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.307 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '7b43ca09-ed65-4465-9fcc-95caa6dc9a88', 'name': 'vn-afn7y6w-4mhk6z2gnzo4-cnlzzwhsflo5-vnf-4ifywm3gsfrq', 'flavor': {'id': 'cf91b364-8467-4d1e-8c92-f7d1fab99905', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '53d12211-5d5c-4333-b3ee-e3dcf1663767'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000004', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '736bbfddbeea47e3ac9d863ba120b8f2', 'user_id': 'e6d3a937c2a74eb0816d9f63820935e0', 'hostId': '17e7a15a42f56673ff2b1bfd38625d4824c4455b94d5713ec4c3a7ee', 'status': 'active', 'metadata': {'metering.server_group': '24f6e5b2-dd43-46f1-87a4-e2efc1300914'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.310 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f', 'name': 'test_0', 'flavor': {'id': 'cf91b364-8467-4d1e-8c92-f7d1fab99905', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '53d12211-5d5c-4333-b3ee-e3dcf1663767'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '736bbfddbeea47e3ac9d863ba120b8f2', 'user_id': 'e6d3a937c2a74eb0816d9f63820935e0', 'hostId': '17e7a15a42f56673ff2b1bfd38625d4824c4455b94d5713ec4c3a7ee', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.310 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.311 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1b800>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.311 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1b800>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.311 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.312 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-12-09T11:02:23.311346) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.316 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/network.incoming.bytes volume: 1696 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.320 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/network.incoming.bytes volume: 1570 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.325 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/network.incoming.bytes volume: 2262 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.325 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
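network.incoming.bytes is a cumulative counter per instance, so deriving a rate needs two successive polls. A sketch of the delta (only the 2262 reading comes from the log; the earlier value and the 30 s interval are assumed for illustration):

    prev, curr = 1100, 2262   # bytes at t and t + interval (prev is hypothetical)
    interval = 30             # seconds, assumed polling period
    print(f'{(curr - prev) / interval:.1f} B/s')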
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.325 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f8a7854a570>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.325 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.325 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e19820>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.326 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e19820>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.326 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.326 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-12-09T11:02:23.326057) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.352 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.352 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.353 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.381 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.382 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.382 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.451 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.451 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.451 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.452 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
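The repeated 1073741824-byte capacities line up with the flavor seen in the discovery records above (disk: 1, ephemeral: 1): each instance exposes two 1 GiB devices, and the third, much smaller device is plausibly a config drive:

    print(2**30)           # 1073741824 bytes = 1 GiB
    print(583680 // 1024)  # 570 KiB
    print(485376 // 1024)  # 474 KiB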
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.452 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f8a75eb8050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.452 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.453 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75eb8080>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.453 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75eb8080>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.453 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.453 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/network.outgoing.packets volume: 23 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.453 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/network.outgoing.packets volume: 21 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.454 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/network.outgoing.packets volume: 24 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.454 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.454 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f8a75eb80e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.454 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.455 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75eb8110>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.455 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75eb8110>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.455 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.455 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.455 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.455 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.456 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.456 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f8a75e1b260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.456 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.456 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1b1a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.456 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1b1a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.457 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.457 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-12-09T11:02:23.453261) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.458 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-12-09T11:02:23.455193) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.459 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-12-09T11:02:23.457048) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.550 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.550 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.551 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.640 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.641 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.641 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.703 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.704 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.704 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.704 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.705 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f8a75eb8170>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.705 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.705 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75eb81a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.705 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75eb81a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.705 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.705 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.705 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.706 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.706 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.706 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f8a75e1b290>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.706 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.706 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1b2c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.706 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1b2c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.706 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.707 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/disk.device.read.latency volume: 386883662 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.707 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/disk.device.read.latency volume: 91523197 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.707 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/disk.device.read.latency volume: 560654086 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.707 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.read.latency volume: 492966519 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.707 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.read.latency volume: 88653492 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.708 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.read.latency volume: 59040938 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.708 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.read.latency volume: 469600468 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.708 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.read.latency volume: 78501609 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.708 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.read.latency volume: 60811824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.706 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-12-09T11:02:23.705359) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.708 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.709 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f8a75e1b2f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.709 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.709 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1b320>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.709 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1b320>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.709 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.709 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.709 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.709 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.710 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.710 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.710 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.710 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.710 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.711 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.711 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.711 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f8a75e1b350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.711 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.711 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1b380>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.711 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1b380>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.711 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.711 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/disk.device.usage volume: 21299200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.711 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-12-09T11:02:23.706899) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.712 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.712 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.712 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-12-09T11:02:23.709328) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.712 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.usage volume: 21299200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.713 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.713 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.713 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-12-09T11:02:23.711828) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.713 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.714 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.714 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.715 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.715 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f8a7710f530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.715 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.715 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a78fa8380>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.716 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a78fa8380>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.716 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.717 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-12-09T11:02:23.716259) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.744 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/cpu volume: 35010000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.770 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/cpu volume: 36170000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.799 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/cpu volume: 45220000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.799 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.800 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f8a78ed1430>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.800 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.800 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a7702ebd0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.800 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a7702ebd0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.800 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.800 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/disk.device.allocation volume: 22224896 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.801 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-12-09T11:02:23.800607) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.801 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.803 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.803 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.allocation volume: 21635072 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.804 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.805 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.805 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.allocation volume: 21307392 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.806 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.807 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.808 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.808 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f8a75e1b3b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.808 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.809 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1b3e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.809 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1b3e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.810 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.810 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.811 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-12-09T11:02:23.810159) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.812 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.812 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.813 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.814 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.814 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.815 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.816 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.816 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.818 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.818 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f8a75e1b410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.818 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.819 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1b440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.819 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1b440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.820 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.820 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-12-09T11:02:23.820139) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.821 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/disk.device.write.latency volume: 1670377851 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.822 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/disk.device.write.latency volume: 9651641 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.823 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.823 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.write.latency volume: 2223058984 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.824 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.write.latency volume: 10632793 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.824 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.825 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.write.latency volume: 1299788707 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.826 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.write.latency volume: 9241063 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.826 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.827 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.828 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f8a75eb8410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.828 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.829 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75eb8440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.829 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75eb8440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.830 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.830 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-12-09T11:02:23.829608) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.830 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.831 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.832 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.833 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.833 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f8a75e1be90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.834 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.834 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a78c21460>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.835 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a78c21460>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.835 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.835 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/network.outgoing.bytes volume: 2398 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.836 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/network.outgoing.bytes volume: 2286 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.837 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/network.outgoing.bytes volume: 2384 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.837 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.838 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f8a75e1b470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.838 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.839 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1b4a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.839 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1b4a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.840 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.840 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/disk.device.write.requests volume: 233 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.841 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.841 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.842 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.write.requests volume: 232 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.843 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-12-09T11:02:23.835132) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.843 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-12-09T11:02:23.840187) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.843 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.844 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.844 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.write.requests volume: 234 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.844 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.845 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.845 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.846 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f8a75e1b830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.846 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.846 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1bce0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.846 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1bce0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.846 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.847 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.847 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-12-09T11:02:23.846725) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.847 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.847 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.848 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
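[editor's note] The ten lines above trace one complete pollster cycle: discovery, a coordination check, a heartbeat update, one sample per instance, then the "Finished polling" marker. A minimal sketch of that control flow, assuming hypothetical discover/get_stats callables (none of these names are ceilometer's real API):

    import datetime

    def run_pollster(name, discover, get_stats, heartbeats):
        # "Executing discovery process for pollsters [...]"
        resources = discover()
        if not resources:
            # "Skip pollster <name>, no new resources found this cycle"
            print(f"Skip pollster {name}, no new resources found this cycle")
            return []
        print(f"Polling pollster {name} in the context of pollsters")
        # "Updated heartbeat for <name> (<timestamp>)"
        heartbeats[name] = datetime.datetime.now(datetime.timezone.utc)
        samples = []
        for res in resources:
            volume = get_stats(res)  # the "_stats_to_sample" debug lines
            print(f"{res}/{name} volume: {volume}")
            samples.append((res, name, volume))
        print(f"Finished polling pollster {name} in the context of pollsters")
        return samples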
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.848 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f8a75e1b4d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.848 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.848 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1b500>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.849 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1b500>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.849 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.849 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-12-09T11:02:23.849160) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.850 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.850 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f8a75e1bad0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.850 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.850 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f8a75e1b530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.850 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.851 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1b560>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.851 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1b560>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.851 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.852 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-12-09T11:02:23.851432) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.852 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.853 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f8a75e1bd40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.853 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.853 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1bd70>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.853 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1bd70>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.853 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-12-09T11:02:23.853403) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.853 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.854 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/network.incoming.packets volume: 17 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.854 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/network.incoming.packets volume: 14 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.854 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/network.incoming.packets volume: 24 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.855 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.855 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f8a75e1bda0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.855 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.855 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1bdd0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.855 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1bdd0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.856 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.856 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.856 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-12-09T11:02:23.855907) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.856 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.857 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.857 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.857 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f8a75e1be00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.857 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.858 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1be30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.858 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1be30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.858 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.858 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.858 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-12-09T11:02:23.858170) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.859 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.859 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.859 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.860 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f8a75e1bef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.860 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.860 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1bf20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.860 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1bf20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.860 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.861 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.861 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-12-09T11:02:23.860394) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.861 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.861 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.862 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.862 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f8a75e1b770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.862 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.862 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1b7a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.862 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1b7a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.863 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.863 14 DEBUG ceilometer.compute.pollsters [-] 32dd7fb0-7003-48cc-b688-4b94946c911f/memory.usage volume: 48.921875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.863 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-12-09T11:02:23.862667) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.863 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/memory.usage volume: 49.10546875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.863 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/memory.usage volume: 48.796875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.864 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.864 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f8a75e1bf80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.864 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.864 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.865 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.865 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.865 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.865 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.865 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.865 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.865 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.866 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.866 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.866 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.866 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.866 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.866 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.866 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.866 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.866 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.866 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.866 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.867 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.867 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.867 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.867 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.867 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.867 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 11:02:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:02:23.867 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
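[editor's note] With every pollster now reporting "Finished processing", per-meter wall time can be recovered straight from this excerpt by pairing the "Polling pollster X" and "Finished polling pollster X" markers. A small parsing sketch, assuming only the line shape visible above (the regexes are illustrative, not ceilometer's):

    import re
    from datetime import datetime

    TS = re.compile(r"(\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d+) \d+ \w+ ")
    START = re.compile(r"Polling pollster (\S+) in the context")
    END = re.compile(r"Finished polling pollster (\S+) in the context")

    def pollster_durations(lines):
        started, durations = {}, {}
        for line in lines:
            ts = TS.search(line)
            if not ts:
                continue
            t = datetime.strptime(ts.group(1), "%Y-%m-%d %H:%M:%S.%f")
            if m := START.search(line):
                started[m.group(1)] = t
            elif (m := END.search(line)) and m.group(1) in started:
                durations[m.group(1)] = (t - started.pop(m.group(1))).total_seconds()
        return durations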
Dec  9 11:02:24 compute-0 podman[245781]: 2025-12-09 11:02:24.943038478 +0000 UTC m=+0.100029461 container health_status 8ad198c17f1da12dc50d5e17562d0139fb2a2f84db056ee9551dbf4f34c4cb9d (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2024-09-18T21:23:30, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, managed_by=edpm_ansible, release-0.7.12=, io.buildah.version=1.29.0, name=ubi9, maintainer=Red Hat, Inc., io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.tags=base rhel9, vcs-type=git, container_name=kepler, com.redhat.component=ubi9-container, release=1214.1726694543, io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec  9 11:02:24 compute-0 podman[245782]: 2025-12-09 11:02:24.949036108 +0000 UTC m=+0.101555641 container health_status ceb1c84a2b093143b9383b7e11364d7e851348d724743a0cd9ce4fd0c7070c92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Dec  9 11:02:25 compute-0 nova_compute[189493]: 2025-12-09 11:02:25.144 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 11:02:27 compute-0 nova_compute[189493]: 2025-12-09 11:02:27.470 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
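[editor's note] The recurring "[POLLIN] on fd 26" lines are the OVS IDL's event loop reporting a readable ovsdb-server connection; the underlying primitive is ordinary poll(2). A stdlib illustration of that wakeup pattern (fd 0 is used here only so the snippet is self-contained):

    import select

    poller = select.poll()
    poller.register(0, select.POLLIN)   # watch stdin for readable data
    for fd, event in poller.poll(100):  # wait up to 100 ms
        if event & select.POLLIN:
            print(f"[POLLIN] on fd {fd}")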
Dec  9 11:02:28 compute-0 podman[245818]: 2025-12-09 11:02:28.961409397 +0000 UTC m=+0.096093505 container health_status 8f562587c42532f877bd4ac5090cf2d81dd9415b6201e22f74972e6d6b9e9403 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  9 11:02:28 compute-0 podman[245819]: 2025-12-09 11:02:28.982910741 +0000 UTC m=+0.113645834 container health_status b432835229990b9e7cd237d75f8273b15e565fca524d4ea9a7c1f1bf3c773614 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm)
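[editor's note] Each podman health_status=healthy record above is podman running the container's configured healthcheck ('test': '/openstack/healthcheck ...') on its timer. The same check can be triggered by hand; a sketch with the container name taken from the log and podman assumed on PATH:

    import subprocess

    # Runs the container's own healthcheck command, as the periodic
    # podman timer does; exit code 0 means healthy.
    result = subprocess.run(
        ["podman", "healthcheck", "run", "ceilometer_agent_compute"],
        capture_output=True, text=True)
    print("healthy" if result.returncode == 0
          else f"unhealthy: {result.stderr.strip()}")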
Dec  9 11:02:29 compute-0 podman[203687]: time="2025-12-09T11:02:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  9 11:02:29 compute-0 podman[203687]: @ - - [09/Dec/2025:11:02:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Dec  9 11:02:29 compute-0 podman[203687]: @ - - [09/Dec/2025:11:02:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4803 "" "Go-http-client/1.1"
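[editor's note] The two GET lines above are the libpod REST API being queried over podman's unix socket (the stats call feeds the container metrics collection seen in this log). A stdlib-only sketch of the same request; the socket path is a common default and an assumption here:

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """http.client over an AF_UNIX socket instead of TCP."""
        def __init__(self, path):
            super().__init__("localhost")
            self.unix_path = path

        def connect(self):
            sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            sock.connect(self.unix_path)
            self.sock = sock

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    containers = json.loads(conn.getresponse().read())
    print(len(containers), "containers")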
Dec  9 11:02:30 compute-0 nova_compute[189493]: 2025-12-09 11:02:30.148 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 11:02:31 compute-0 openstack_network_exporter[205823]: ERROR   11:02:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  9 11:02:31 compute-0 openstack_network_exporter[205823]: ERROR   11:02:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  9 11:02:31 compute-0 openstack_network_exporter[205823]: ERROR   11:02:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  9 11:02:31 compute-0 openstack_network_exporter[205823]: ERROR   11:02:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  9 11:02:31 compute-0 openstack_network_exporter[205823]: ERROR   11:02:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
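[editor's note] The exporter errors above all reduce to the same cause: no *.ctl control sockets where the ovs-appctl-style calls expect them (ovn-northd does not run on a compute node, and the dpif-netdev PMD queries need a userspace datapath). A quick check for the sockets, using typical default run directories as an assumption:

    import glob

    # ovs-appctl and friends find daemons via <name>.<pid>.ctl files in the
    # daemons' run directories; if these globs come back empty, the exporter's
    # "no control socket files found" errors are expected.
    for pattern in ("/var/run/openvswitch/*.ctl", "/var/run/ovn/*.ctl"):
        hits = glob.glob(pattern)
        print(pattern, "->", hits or "no control socket files found")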
Dec  9 11:02:32 compute-0 nova_compute[189493]: 2025-12-09 11:02:32.475 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 11:02:33 compute-0 nova_compute[189493]: 2025-12-09 11:02:33.842 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  9 11:02:34 compute-0 nova_compute[189493]: 2025-12-09 11:02:34.836 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  9 11:02:34 compute-0 nova_compute[189493]: 2025-12-09 11:02:34.841 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  9 11:02:34 compute-0 nova_compute[189493]: 2025-12-09 11:02:34.841 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
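[editor's note] The periodic_task lines here and below come from oslo_service running ComputeManager's registered tasks on their intervals. A stripped-down stand-in for that dispatch loop (task names copied from the log; the real scheduler in oslo_service.periodic_task honors per-task spacing and jitter):

    import time

    TASKS = [
        "_poll_rescued_instances",
        "_check_instance_build_time",
        "_instance_usage_audit",
        "_poll_volume_usage",
    ]

    def run_periodic_tasks(tasks, interval=60.0, cycles=1):
        for _ in range(cycles):
            for name in tasks:
                print(f"Running periodic task ComputeManager.{name}")
            time.sleep(interval)

    run_periodic_tasks(TASKS, interval=0.0)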
Dec  9 11:02:34 compute-0 podman[245854]: 2025-12-09 11:02:34.973325864 +0000 UTC m=+0.111894487 container health_status 5da5cd4e36e0bba48fb617392bc8983ed1dbced7e4599ef74bb3327a2d50468d (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9-minimal, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, config_id=edpm, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, release=1755695350, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, vendor=Red Hat, Inc.)
Dec  9 11:02:35 compute-0 nova_compute[189493]: 2025-12-09 11:02:35.151 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 11:02:36 compute-0 nova_compute[189493]: 2025-12-09 11:02:36.842 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  9 11:02:36 compute-0 podman[245873]: 2025-12-09 11:02:36.992789593 +0000 UTC m=+0.145426242 container health_status e0a077177b2f078df1f170a6e5c0e8e08d4365b999ec0c487047ed6ab628f3d6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_managed=true, config_id=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  9 11:02:37 compute-0 nova_compute[189493]: 2025-12-09 11:02:37.477 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 11:02:38 compute-0 podman[245900]: 2025-12-09 11:02:38.981972205 +0000 UTC m=+0.114999410 container health_status d3a438131bb4ae6fd62d2e1493edbbbd51d1b8d6cbe1e9243f414a3aa421452b (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec  9 11:02:39 compute-0 nova_compute[189493]: 2025-12-09 11:02:39.841 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  9 11:02:39 compute-0 nova_compute[189493]: 2025-12-09 11:02:39.842 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec  9 11:02:40 compute-0 nova_compute[189493]: 2025-12-09 11:02:40.153 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 11:02:40 compute-0 nova_compute[189493]: 2025-12-09 11:02:40.215 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Acquiring lock "refresh_cache-7b43ca09-ed65-4465-9fcc-95caa6dc9a88" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec  9 11:02:40 compute-0 nova_compute[189493]: 2025-12-09 11:02:40.216 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Acquired lock "refresh_cache-7b43ca09-ed65-4465-9fcc-95caa6dc9a88" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec  9 11:02:40 compute-0 nova_compute[189493]: 2025-12-09 11:02:40.216 189497 DEBUG nova.network.neutron [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] [instance: 7b43ca09-ed65-4465-9fcc-95caa6dc9a88] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec  9 11:02:41 compute-0 nova_compute[189493]: 2025-12-09 11:02:41.447 189497 DEBUG nova.network.neutron [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] [instance: 7b43ca09-ed65-4465-9fcc-95caa6dc9a88] Updating instance_info_cache with network_info: [{"id": "b903bb84-e176-4730-b223-613a9b01712b", "address": "fa:16:3e:91:d3:f4", "network": {"id": "c5af7354-5afe-400a-9e13-5500648117d8", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.92", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.176", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "736bbfddbeea47e3ac9d863ba120b8f2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb903bb84-e1", "ovs_interfaceid": "b903bb84-e176-4730-b223-613a9b01712b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec  9 11:02:41 compute-0 nova_compute[189493]: 2025-12-09 11:02:41.468 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Releasing lock "refresh_cache-7b43ca09-ed65-4465-9fcc-95caa6dc9a88" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec  9 11:02:41 compute-0 nova_compute[189493]: 2025-12-09 11:02:41.469 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] [instance: 7b43ca09-ed65-4465-9fcc-95caa6dc9a88] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
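[editor's note] The cache refresh above embeds the instance's full network_info as JSON. Pulling the addressing out of that structure is straightforward; this sketch hard-codes a trimmed copy of the payload from the log line:

    # Trimmed from the "Updating instance_info_cache" line above.
    network_info = [{
        "address": "fa:16:3e:91:d3:f4",
        "network": {"subnets": [{
            "cidr": "192.168.0.0/24",
            "ips": [{
                "address": "192.168.0.92",
                "floating_ips": [{"address": "192.168.122.176"}],
            }],
        }]},
    }]

    for vif in network_info:
        for subnet in vif["network"]["subnets"]:
            for ip in subnet["ips"]:
                floats = [f["address"] for f in ip.get("floating_ips", [])]
                print(vif["address"], ip["address"], "->", floats)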
Dec  9 11:02:41 compute-0 nova_compute[189493]: 2025-12-09 11:02:41.469 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  9 11:02:41 compute-0 nova_compute[189493]: 2025-12-09 11:02:41.470 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  9 11:02:41 compute-0 nova_compute[189493]: 2025-12-09 11:02:41.500 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  9 11:02:41 compute-0 nova_compute[189493]: 2025-12-09 11:02:41.501 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  9 11:02:41 compute-0 nova_compute[189493]: 2025-12-09 11:02:41.501 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
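[editor's note] The Acquiring/acquired/released trio around "compute_resources" is oslo_concurrency.lockutils instrumenting a synchronized method. A stdlib imitation of that pattern, mirroring only the logging (the real decorator is oslo_concurrency.lockutils.synchronized and also supports inter-process file locks):

    import functools
    import threading
    import time

    _locks = {}

    def synchronized(name):
        lock = _locks.setdefault(name, threading.Lock())
        def decorator(fn):
            @functools.wraps(fn)
            def wrapper(*args, **kwargs):
                print(f'Acquiring lock "{name}" by "{fn.__qualname__}"')
                t0 = time.monotonic()
                lock.acquire()
                print(f'Lock "{name}" acquired :: waited {time.monotonic() - t0:.3f}s')
                t1 = time.monotonic()
                try:
                    return fn(*args, **kwargs)
                finally:
                    lock.release()
                    print(f'Lock "{name}" released :: held {time.monotonic() - t1:.3f}s')
            return wrapper
        return decorator

    @synchronized("compute_resources")
    def clean_compute_node_cache():
        pass

    clean_compute_node_cache()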
Dec  9 11:02:41 compute-0 nova_compute[189493]: 2025-12-09 11:02:41.502 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  9 11:02:41 compute-0 nova_compute[189493]: 2025-12-09 11:02:41.682 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/32dd7fb0-7003-48cc-b688-4b94946c911f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  9 11:02:41 compute-0 nova_compute[189493]: 2025-12-09 11:02:41.786 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/32dd7fb0-7003-48cc-b688-4b94946c911f/disk --force-share --output=json" returned: 0 in 0.103s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  9 11:02:41 compute-0 nova_compute[189493]: 2025-12-09 11:02:41.788 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/32dd7fb0-7003-48cc-b688-4b94946c911f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  9 11:02:41 compute-0 nova_compute[189493]: 2025-12-09 11:02:41.854 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/32dd7fb0-7003-48cc-b688-4b94946c911f/disk --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  9 11:02:41 compute-0 nova_compute[189493]: 2025-12-09 11:02:41.855 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/32dd7fb0-7003-48cc-b688-4b94946c911f/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  9 11:02:41 compute-0 nova_compute[189493]: 2025-12-09 11:02:41.930 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/32dd7fb0-7003-48cc-b688-4b94946c911f/disk.eph0 --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  9 11:02:41 compute-0 nova_compute[189493]: 2025-12-09 11:02:41.930 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/32dd7fb0-7003-48cc-b688-4b94946c911f/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  9 11:02:42 compute-0 nova_compute[189493]: 2025-12-09 11:02:42.010 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/32dd7fb0-7003-48cc-b688-4b94946c911f/disk.eph0 --force-share --output=json" returned: 0 in 0.079s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
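Each qemu-img probe above is re-exec'd under oslo.concurrency's prlimit wrapper, which caps the child's address space (--as=1073741824, i.e. 1 GiB) and CPU time (--cpu=30) so a malformed image cannot wedge the agent; --force-share lets it inspect a disk the running guest still holds open. A sketch of the equivalent call through processutils.execute (disk path copied from the log; error handling omitted):

import json

from oslo_concurrency import processutils

limits = processutils.ProcessLimits(
    address_space=1024 ** 3,  # --as=1073741824: 1 GiB address-space cap
    cpu_time=30,              # --cpu=30: 30 s CPU-time cap
)

out, _err = processutils.execute(
    'env', 'LC_ALL=C', 'LANG=C',
    'qemu-img', 'info',
    '/var/lib/nova/instances/32dd7fb0-7003-48cc-b688-4b94946c911f/disk',
    '--force-share', '--output=json',
    prlimit=limits,
)
info = json.loads(out)  # e.g. virtual-size, actual-size, format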
Dec  9 11:02:42 compute-0 nova_compute[189493]: 2025-12-09 11:02:42.016 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  9 11:02:42 compute-0 nova_compute[189493]: 2025-12-09 11:02:42.079 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  9 11:02:42 compute-0 nova_compute[189493]: 2025-12-09 11:02:42.080 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  9 11:02:42 compute-0 nova_compute[189493]: 2025-12-09 11:02:42.151 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  9 11:02:42 compute-0 nova_compute[189493]: 2025-12-09 11:02:42.152 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  9 11:02:42 compute-0 nova_compute[189493]: 2025-12-09 11:02:42.216 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.eph0 --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  9 11:02:42 compute-0 nova_compute[189493]: 2025-12-09 11:02:42.218 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  9 11:02:42 compute-0 nova_compute[189493]: 2025-12-09 11:02:42.282 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.eph0 --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  9 11:02:42 compute-0 nova_compute[189493]: 2025-12-09 11:02:42.289 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  9 11:02:42 compute-0 nova_compute[189493]: 2025-12-09 11:02:42.349 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  9 11:02:42 compute-0 nova_compute[189493]: 2025-12-09 11:02:42.352 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  9 11:02:42 compute-0 nova_compute[189493]: 2025-12-09 11:02:42.452 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk --force-share --output=json" returned: 0 in 0.100s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  9 11:02:42 compute-0 nova_compute[189493]: 2025-12-09 11:02:42.453 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  9 11:02:42 compute-0 nova_compute[189493]: 2025-12-09 11:02:42.480 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:02:42 compute-0 nova_compute[189493]: 2025-12-09 11:02:42.540 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.eph0 --force-share --output=json" returned: 0 in 0.087s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  9 11:02:42 compute-0 nova_compute[189493]: 2025-12-09 11:02:42.542 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  9 11:02:42 compute-0 nova_compute[189493]: 2025-12-09 11:02:42.633 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.eph0 --force-share --output=json" returned: 0 in 0.091s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  9 11:02:42 compute-0 nova_compute[189493]: 2025-12-09 11:02:42.994 189497 WARNING nova.virt.libvirt.driver [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  9 11:02:42 compute-0 nova_compute[189493]: 2025-12-09 11:02:42.995 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4778MB free_disk=72.13796615600586GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  9 11:02:42 compute-0 nova_compute[189493]: 2025-12-09 11:02:42.995 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  9 11:02:42 compute-0 nova_compute[189493]: 2025-12-09 11:02:42.996 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  9 11:02:43 compute-0 nova_compute[189493]: 2025-12-09 11:02:43.316 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Instance 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  9 11:02:43 compute-0 nova_compute[189493]: 2025-12-09 11:02:43.316 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Instance 32dd7fb0-7003-48cc-b688-4b94946c911f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  9 11:02:43 compute-0 nova_compute[189493]: 2025-12-09 11:02:43.317 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Instance 7b43ca09-ed65-4465-9fcc-95caa6dc9a88 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  9 11:02:43 compute-0 nova_compute[189493]: 2025-12-09 11:02:43.317 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  9 11:02:43 compute-0 nova_compute[189493]: 2025-12-09 11:02:43.318 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=2048MB phys_disk=79GB used_disk=6GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  9 11:02:43 compute-0 nova_compute[189493]: 2025-12-09 11:02:43.417 189497 DEBUG nova.compute.provider_tree [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Inventory has not changed in ProviderTree for provider: cdc1168d-33c9-4d2c-8f23-1b695a68afd0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  9 11:02:43 compute-0 nova_compute[189493]: 2025-12-09 11:02:43.438 189497 DEBUG nova.scheduler.client.report [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Inventory has not changed for provider cdc1168d-33c9-4d2c-8f23-1b695a68afd0 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
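Placement derives schedulable capacity from each inventory record as (total - reserved) * allocation_ratio, which is why the hypervisor view above reports only free_vcpus=5 (8 physical minus 3 allocated) while the scheduler still sees ample headroom. A quick check against the inventory values copied from the log (min_unit/max_unit/step_size omitted):

inventory = {
    'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
    'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
    'DISK_GB':   {'total': 79,   'reserved': 1,   'allocation_ratio': 0.9},
}

for rc, inv in inventory.items():
    # Same formula placement applies when filtering allocation candidates.
    capacity = int((inv['total'] - inv['reserved']) * inv['allocation_ratio'])
    print(rc, capacity)
# VCPU 32, MEMORY_MB 7167, DISK_GB 70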
Dec  9 11:02:43 compute-0 nova_compute[189493]: 2025-12-09 11:02:43.441 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  9 11:02:43 compute-0 nova_compute[189493]: 2025-12-09 11:02:43.442 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.446s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  9 11:02:45 compute-0 nova_compute[189493]: 2025-12-09 11:02:45.157 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:02:46 compute-0 podman[245964]: 2025-12-09 11:02:46.968448133 +0000 UTC m=+0.107338105 container health_status 0391d8911d61abd7376f1f93f329cadfe8d3add845c9e6f46fc2c3dfbcc4f02a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, config_id=multipathd)
Dec  9 11:02:47 compute-0 nova_compute[189493]: 2025-12-09 11:02:47.483 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:02:47 compute-0 nova_compute[189493]: 2025-12-09 11:02:47.814 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  9 11:02:47 compute-0 nova_compute[189493]: 2025-12-09 11:02:47.815 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  9 11:02:50 compute-0 nova_compute[189493]: 2025-12-09 11:02:50.159 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:02:50 compute-0 podman[245983]: 2025-12-09 11:02:50.951867798 +0000 UTC m=+0.094200664 container health_status 8508a94dacd5acdb5dbf860f4282331529be5c86ebd3e90b10e1dde8bc5013e9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec  9 11:02:52 compute-0 nova_compute[189493]: 2025-12-09 11:02:52.485 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:02:55 compute-0 nova_compute[189493]: 2025-12-09 11:02:55.162 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:02:55 compute-0 podman[246007]: 2025-12-09 11:02:55.954022785 +0000 UTC m=+0.099770284 container health_status ceb1c84a2b093143b9383b7e11364d7e851348d724743a0cd9ce4fd0c7070c92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, config_id=edpm, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Dec  9 11:02:55 compute-0 podman[246006]: 2025-12-09 11:02:55.954611471 +0000 UTC m=+0.103259827 container health_status 8ad198c17f1da12dc50d5e17562d0139fb2a2f84db056ee9551dbf4f34c4cb9d (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.component=ubi9-container, config_id=edpm, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, architecture=x86_64, name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.openshift.tags=base rhel9, release=1214.1726694543, release-0.7.12=, vcs-type=git, io.buildah.version=1.29.0, vendor=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., build-date=2024-09-18T21:23:30, container_name=kepler, distribution-scope=public, version=9.4)
Dec  9 11:02:57 compute-0 nova_compute[189493]: 2025-12-09 11:02:57.490 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:02:59 compute-0 podman[203687]: time="2025-12-09T11:02:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  9 11:02:59 compute-0 podman[203687]: @ - - [09/Dec/2025:11:02:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Dec  9 11:02:59 compute-0 podman[203687]: @ - - [09/Dec/2025:11:02:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4799 "" "Go-http-client/1.1"
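The two access-log lines above are podman_exporter scraping podman's libpod REST API over the unix socket mounted into its container (/run/podman/podman.sock, per the config_data earlier). The same endpoint can be queried directly; a minimal stdlib-only sketch (socket path and API version taken from the log):

import http.client
import json
import socket

class UnixHTTPConnection(http.client.HTTPConnection):
    """HTTPConnection variant that dials a unix-domain socket."""

    def __init__(self, path):
        super().__init__('localhost')  # host is unused for AF_UNIX
        self._path = path

    def connect(self):
        self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        self.sock.connect(self._path)

conn = UnixHTTPConnection('/run/podman/podman.sock')
conn.request('GET', '/v4.9.3/libpod/containers/json?all=true')
containers = json.loads(conn.getresponse().read())
print(len(containers), 'containers')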
Dec  9 11:02:59 compute-0 podman[246045]: 2025-12-09 11:02:59.959293294 +0000 UTC m=+0.093383804 container health_status 8f562587c42532f877bd4ac5090cf2d81dd9415b6201e22f74972e6d6b9e9403 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  9 11:02:59 compute-0 podman[246046]: 2025-12-09 11:02:59.985904514 +0000 UTC m=+0.115770520 container health_status b432835229990b9e7cd237d75f8273b15e565fca524d4ea9a7c1f1bf3c773614 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=edpm, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251125)
Dec  9 11:03:00 compute-0 nova_compute[189493]: 2025-12-09 11:03:00.165 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:03:01 compute-0 openstack_network_exporter[205823]: ERROR   11:03:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  9 11:03:01 compute-0 openstack_network_exporter[205823]: ERROR   11:03:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  9 11:03:01 compute-0 openstack_network_exporter[205823]: ERROR   11:03:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  9 11:03:01 compute-0 openstack_network_exporter[205823]: ERROR   11:03:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  9 11:03:01 compute-0 openstack_network_exporter[205823]: ERROR   11:03:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
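These exporter errors are expected on a compute node: ovn-northd and the OVS DB server do not run here, so their control sockets do not exist, and the dpif-netdev/* commands only apply to the userspace (DPDK) datapath, which this kernel-datapath host does not use. The exporter is effectively issuing ovs-appctl calls against per-daemon control sockets; a sketch of the same probe from Python, assuming the default /var/run/openvswitch socket location:

import glob
import subprocess

# Each running OVS/OVN daemon exposes a <name>.<pid>.ctl control socket.
# If the daemon is absent, no socket file exists and the probe fails just
# like the exporter's "no control socket files found" errors above.
sockets = glob.glob('/var/run/openvswitch/ovs-vswitchd.*.ctl')
if sockets:
    subprocess.run(
        ['ovs-appctl', '-t', sockets[0], 'dpif-netdev/pmd-perf-show'],
        check=False)
else:
    print('ovs-vswitchd control socket not found')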
Dec  9 11:03:02 compute-0 nova_compute[189493]: 2025-12-09 11:03:02.492 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:03:05 compute-0 nova_compute[189493]: 2025-12-09 11:03:05.168 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:03:06 compute-0 podman[246081]: 2025-12-09 11:03:06.002440863 +0000 UTC m=+0.149393818 container health_status 5da5cd4e36e0bba48fb617392bc8983ed1dbced7e4599ef74bb3327a2d50468d (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, vendor=Red Hat, Inc., version=9.6, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, io.openshift.expose-services=, io.buildah.version=1.33.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, managed_by=edpm_ansible, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, maintainer=Red Hat, Inc., release=1755695350, architecture=x86_64, build-date=2025-08-20T13:12:41, distribution-scope=public)
Dec  9 11:03:07 compute-0 nova_compute[189493]: 2025-12-09 11:03:07.496 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:03:08 compute-0 podman[246101]: 2025-12-09 11:03:08.050721031 +0000 UTC m=+0.191993065 container health_status e0a077177b2f078df1f170a6e5c0e8e08d4365b999ec0c487047ed6ab628f3d6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec  9 11:03:09 compute-0 podman[246126]: 2025-12-09 11:03:09.961169491 +0000 UTC m=+0.104879980 container health_status d3a438131bb4ae6fd62d2e1493edbbbd51d1b8d6cbe1e9243f414a3aa421452b (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  9 11:03:10 compute-0 nova_compute[189493]: 2025-12-09 11:03:10.171 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:03:12 compute-0 nova_compute[189493]: 2025-12-09 11:03:12.498 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:03:15 compute-0 nova_compute[189493]: 2025-12-09 11:03:15.174 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:03:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:03:16.996 106644 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  9 11:03:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:03:16.997 106644 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  9 11:03:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:03:16.998 106644 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  9 11:03:17 compute-0 nova_compute[189493]: 2025-12-09 11:03:17.502 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:03:17 compute-0 podman[246151]: 2025-12-09 11:03:17.975528073 +0000 UTC m=+0.129194088 container health_status 0391d8911d61abd7376f1f93f329cadfe8d3add845c9e6f46fc2c3dfbcc4f02a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=multipathd, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec  9 11:03:20 compute-0 nova_compute[189493]: 2025-12-09 11:03:20.177 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:03:21 compute-0 podman[246172]: 2025-12-09 11:03:21.979011416 +0000 UTC m=+0.122037848 container health_status 8508a94dacd5acdb5dbf860f4282331529be5c86ebd3e90b10e1dde8bc5013e9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec  9 11:03:22 compute-0 nova_compute[189493]: 2025-12-09 11:03:22.505 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:03:22 compute-0 nova_compute[189493]: 2025-12-09 11:03:22.741 189497 DEBUG oslo_concurrency.lockutils [None req-79c5c013-c9c6-4ee3-a892-46e3f0bbb07b e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Acquiring lock "32dd7fb0-7003-48cc-b688-4b94946c911f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  9 11:03:22 compute-0 nova_compute[189493]: 2025-12-09 11:03:22.741 189497 DEBUG oslo_concurrency.lockutils [None req-79c5c013-c9c6-4ee3-a892-46e3f0bbb07b e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lock "32dd7fb0-7003-48cc-b688-4b94946c911f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  9 11:03:22 compute-0 nova_compute[189493]: 2025-12-09 11:03:22.743 189497 DEBUG oslo_concurrency.lockutils [None req-79c5c013-c9c6-4ee3-a892-46e3f0bbb07b e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Acquiring lock "32dd7fb0-7003-48cc-b688-4b94946c911f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  9 11:03:22 compute-0 nova_compute[189493]: 2025-12-09 11:03:22.744 189497 DEBUG oslo_concurrency.lockutils [None req-79c5c013-c9c6-4ee3-a892-46e3f0bbb07b e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lock "32dd7fb0-7003-48cc-b688-4b94946c911f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  9 11:03:22 compute-0 nova_compute[189493]: 2025-12-09 11:03:22.744 189497 DEBUG oslo_concurrency.lockutils [None req-79c5c013-c9c6-4ee3-a892-46e3f0bbb07b e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lock "32dd7fb0-7003-48cc-b688-4b94946c911f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  9 11:03:22 compute-0 nova_compute[189493]: 2025-12-09 11:03:22.746 189497 INFO nova.compute.manager [None req-79c5c013-c9c6-4ee3-a892-46e3f0bbb07b e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 32dd7fb0-7003-48cc-b688-4b94946c911f] Terminating instance#033[00m
Dec  9 11:03:22 compute-0 nova_compute[189493]: 2025-12-09 11:03:22.747 189497 DEBUG nova.compute.manager [None req-79c5c013-c9c6-4ee3-a892-46e3f0bbb07b e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 32dd7fb0-7003-48cc-b688-4b94946c911f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
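The terminate sequence above (per-instance lock, instance-events cleanup, then _shutdown_instance) is the compute-side handling of a server delete. A minimal sketch of the API call that ultimately lands here, assuming openstacksdk and a clouds.yaml entry (the cloud name is hypothetical; the instance UUID is from the log):

import openstack

# Assumes a clouds.yaml profile named 'overcloud' with valid credentials.
conn = openstack.connect(cloud='overcloud')

# Deleting the server drives terminate_instance on the compute host, which
# takes the per-instance lock seen above before destroying the libvirt
# domain and unplugging its VIFs.
conn.compute.delete_server('32dd7fb0-7003-48cc-b688-4b94946c911f')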
Dec  9 11:03:22 compute-0 kernel: tapd6164edf-ad (unregistering): left promiscuous mode
Dec  9 11:03:22 compute-0 NetworkManager[56302]: <info>  [1765278202.8036] device (tapd6164edf-ad): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  9 11:03:22 compute-0 nova_compute[189493]: 2025-12-09 11:03:22.825 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:03:22 compute-0 ovn_controller[97780]: 2025-12-09T11:03:22Z|00054|binding|INFO|Releasing lport d6164edf-adb9-4fa5-9e6d-bae85d8af633 from this chassis (sb_readonly=0)
Dec  9 11:03:22 compute-0 ovn_controller[97780]: 2025-12-09T11:03:22Z|00055|binding|INFO|Setting lport d6164edf-adb9-4fa5-9e6d-bae85d8af633 down in Southbound
Dec  9 11:03:22 compute-0 ovn_controller[97780]: 2025-12-09T11:03:22Z|00056|binding|INFO|Removing iface tapd6164edf-ad ovn-installed in OVS
Dec  9 11:03:22 compute-0 nova_compute[189493]: 2025-12-09 11:03:22.828 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:03:22 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:03:22.834 106644 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:83:9f:5d 192.168.0.98'], port_security=['fa:16:3e:83:9f:5d 192.168.0.98'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'vnf-scaleup_group-5eiooafn7y6w-fel25ona52mn-zi55qxbdeak4-port-7xvtkga34xqd', 'neutron:cidrs': '192.168.0.98/24', 'neutron:device_id': '32dd7fb0-7003-48cc-b688-4b94946c911f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c5af7354-5afe-400a-9e13-5500648117d8', 'neutron:port_capabilities': '', 'neutron:port_name': 'vnf-scaleup_group-5eiooafn7y6w-fel25ona52mn-zi55qxbdeak4-port-7xvtkga34xqd', 'neutron:project_id': '736bbfddbeea47e3ac9d863ba120b8f2', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd86dfae4-cfd5-480d-a50e-0084326b1439', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.244', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=61df917c-633f-4b35-857d-39fd859caf35, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fa01184a610>], logical_port=d6164edf-adb9-4fa5-9e6d-bae85d8af633) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fa01184a610>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  9 11:03:22 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:03:22.837 106644 INFO neutron.agent.ovn.metadata.agent [-] Port d6164edf-adb9-4fa5-9e6d-bae85d8af633 in datapath c5af7354-5afe-400a-9e13-5500648117d8 unbound from our chassis#033[00m
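The metadata agent reacts to the southbound change through ovsdbapp's event matching, visible in the "Matched UPDATE: PortBindingUpdatedEvent(...)" line above. A stripped-down sketch of such an event class (the event tuple and table name are from the log; the handler body is hypothetical):

from ovsdbapp.backend.ovs_idl import event

class PortBindingUpdatedEvent(event.RowEvent):
    """Fires when a Port_Binding row changes, e.g. a port is unbound."""

    def __init__(self):
        # events=('update',), table='Port_Binding', conditions=None --
        # the same match parameters printed in the DEBUG line above.
        super().__init__((self.ROW_UPDATE,), 'Port_Binding', None)

    def run(self, event, row, old):
        # 'old' carries the previous values (up=[True], chassis=[...]),
        # letting the agent detect that the port left this chassis.
        print('Port_Binding changed for %s' % row.logical_port)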
Dec  9 11:03:22 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:03:22.841 106644 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c5af7354-5afe-400a-9e13-5500648117d8#033[00m
Dec  9 11:03:22 compute-0 nova_compute[189493]: 2025-12-09 11:03:22.848 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:03:22 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:03:22.857 239934 DEBUG oslo.privsep.daemon [-] privsep: reply[8174dc22-5aea-47d1-973c-bd027bc71035]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  9 11:03:22 compute-0 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000003.scope: Deactivated successfully.
Dec  9 11:03:22 compute-0 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000003.scope: Consumed 1min 35.116s CPU time.
Dec  9 11:03:22 compute-0 systemd-machined[155790]: Machine qemu-3-instance-00000003 terminated.
Dec  9 11:03:22 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:03:22.900 239949 DEBUG oslo.privsep.daemon [-] privsep: reply[a67882a3-07a9-44e1-abd7-2f4750400017]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  9 11:03:22 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:03:22.905 239949 DEBUG oslo.privsep.daemon [-] privsep: reply[933b6741-042f-4a57-a916-180921175c6a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  9 11:03:22 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:03:22.938 239949 DEBUG oslo.privsep.daemon [-] privsep: reply[e2d57e7f-35ae-4120-a21d-d8b519a01ec7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  9 11:03:22 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:03:22.960 239934 DEBUG oslo.privsep.daemon [-] privsep: reply[0998803e-b9e1-4045-b31a-68e5012a6c70]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc5af7354-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:bf:0d:a0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 11, 'tx_packets': 13, 'rx_bytes': 742, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 11, 'tx_packets': 13, 'rx_bytes': 742, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 12], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 396027, 'reachable_time': 16329, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 246205, 'error': None, 'target': 'ovnmeta-c5af7354-5afe-400a-9e13-5500648117d8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  9 11:03:22 compute-0 nova_compute[189493]: 2025-12-09 11:03:22.975 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:03:22 compute-0 nova_compute[189493]: 2025-12-09 11:03:22.980 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:03:22 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:03:22.985 239934 DEBUG oslo.privsep.daemon [-] privsep: reply[d19c0d1d-8e5e-4c39-86f0-e7b607827d6b]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapc5af7354-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 396043, 'tstamp': 396043}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 246207, 'error': None, 'target': 'ovnmeta-c5af7354-5afe-400a-9e13-5500648117d8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '192.168.0.2'], ['IFA_LOCAL', '192.168.0.2'], ['IFA_BROADCAST', '192.168.0.255'], ['IFA_LABEL', 'tapc5af7354-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 396046, 'tstamp': 396046}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 246207, 'error': None, 'target': 'ovnmeta-c5af7354-5afe-400a-9e13-5500648117d8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
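The two privsep replies above are pyroute2 netlink dumps relayed back from the agent's privileged helper: an RTM_NEWLINK message describing tapc5af7354-51 (a veth, state UP, MTU 1500), then RTM_NEWADDR messages showing the metadata address 169.254.169.254/32 and 192.168.0.2/24 configured inside the ovnmeta-c5af7354-... namespace. A minimal sketch of how such dumps can be produced directly with pyroute2 (the agent itself issues the equivalent call through oslo.privsep rather than in-process):

    # Illustrative only; assumes pyroute2 is installed and the namespace
    # named in the log still exists on this host.
    from pyroute2 import NetNS

    NS = 'ovnmeta-c5af7354-5afe-400a-9e13-5500648117d8'

    with NetNS(NS) as ns:
        for link in ns.get_links():
            # Each message carries the same (name, value) 'attrs' list
            # that is serialized into the privsep reply above.
            print(link.get_attr('IFLA_IFNAME'),
                  link.get_attr('IFLA_OPERSTATE'),
                  link.get_attr('IFLA_ADDRESS'))
        for addr in ns.get_addr():
            print(addr.get_attr('IFA_LABEL'),
                  addr.get_attr('IFA_ADDRESS'), addr['prefixlen'])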
Dec  9 11:03:22 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:03:22.987 106644 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc5af7354-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  9 11:03:22 compute-0 nova_compute[189493]: 2025-12-09 11:03:22.990 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:03:22 compute-0 nova_compute[189493]: 2025-12-09 11:03:22.995 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:03:22 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:03:22.997 106644 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc5af7354-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  9 11:03:22 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:03:22.997 106644 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  9 11:03:22 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:03:22.998 106644 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc5af7354-50, col_values=(('external_ids', {'iface-id': '3eb47070-bc26-4827-a5a8-68152f05129c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  9 11:03:22 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:03:22.998 106644 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
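The three ovsdbapp transactions above replug the metadata tap: drop tapc5af7354-50 from br-ex if present, add it to br-int, and point its external_ids:iface-id at the OVN port. "Transaction caused no change" means the desired state was already in place, so the IDL skipped the commit. A hedged sketch of the same calls through ovsdbapp's Open_vSwitch API (the socket path is an assumption; the agent uses its own long-lived IDL connection):

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server(
        'unix:/run/openvswitch/db.sock', 'Open_vSwitch')
    api = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    with api.transaction(check_error=True) as txn:
        # Same commands the log shows: DelPortCommand, AddPortCommand,
        # DbSetCommand on the Interface row.
        txn.add(api.del_port('tapc5af7354-50', bridge='br-ex', if_exists=True))
        txn.add(api.add_port('br-int', 'tapc5af7354-50', may_exist=True))
        txn.add(api.db_set(
            'Interface', 'tapc5af7354-50',
            ('external_ids',
             {'iface-id': '3eb47070-bc26-4827-a5a8-68152f05129c'})))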
Dec  9 11:03:23 compute-0 nova_compute[189493]: 2025-12-09 11:03:23.046 189497 INFO nova.virt.libvirt.driver [-] [instance: 32dd7fb0-7003-48cc-b688-4b94946c911f] Instance destroyed successfully.#033[00m
Dec  9 11:03:23 compute-0 nova_compute[189493]: 2025-12-09 11:03:23.048 189497 DEBUG nova.objects.instance [None req-79c5c013-c9c6-4ee3-a892-46e3f0bbb07b e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lazy-loading 'resources' on Instance uuid 32dd7fb0-7003-48cc-b688-4b94946c911f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  9 11:03:23 compute-0 nova_compute[189493]: 2025-12-09 11:03:23.064 189497 DEBUG nova.virt.libvirt.vif [None req-79c5c013-c9c6-4ee3-a892-46e3f0bbb07b e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-09T10:55:31Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='vn-afn7y6w-fel25ona52mn-zi55qxbdeak4-vnf-r5yma3vxwd5y',ec2_ids=<?>,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-afn7y6w-fel25ona52mn-zi55qxbdeak4-vnf-r5yma3vxwd5y',id=3,image_ref='53d12211-5d5c-4333-b3ee-e3dcf1663767',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-09T10:55:41Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='24f6e5b2-dd43-46f1-87a4-e2efc1300914'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='736bbfddbeea47e3ac9d863ba120b8f2',ramdisk_id='',reservation_id='r-8nh5c9bf',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='admin,reader,member',image_base_image_ref='53d12211-5d5c-4333-b3ee-e3dcf1663767',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',owner_project_name='admin',owner_user_name='admin'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-09T10:55:41Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT04MjgxNTk2NzQ2NzMwMzA2NTI0PT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTgyODE1OTY3NDY3MzAzMDY1MjQ9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09ODI4MTU5Njc0NjczMDMwNjUyND09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91
dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTgyODE1OTY3NDY3MzAzMDY1MjQ9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT04MjgxNTk2NzQ2NzMwMzA2NTI0PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT04MjgxNTk2NzQ2NzMwMzA2NTI0PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0U
tMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob2JqZWN0KToKC
Dec  9 11:03:23 compute-0 nova_compute[189493]: Cclc1xuJywgJyAnLmpvaW4oYXJncykpICAjIG5vcWEKICAgIHRyeToKICAgICAgICBscyA9IExvZ1N0cmVhbSgpCiAgICAgICAgcCA9IHN1YnByb2Nlc3MuUG9wZW4oYXJncywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09ODI4MTU5Njc0NjczMDMwNjUyND09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTgyODE1OTY3NDY3MzAzMDY1MjQ9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT04MjgxNTk2NzQ2NzMwMzA2NTI0PT0tLQo=',user_id='e6d3a937c2a74eb0816d9f63820935e0',uuid=32dd7fb0-7003-48cc-b688-4b94946c911f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d6164edf-adb9-4fa5-9e6d-bae85d8af633", "address": "fa:16:3e:83:9f:5d", "network": {"id": "c5af7354-5afe-400a-9e13-5500648117d8", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.98", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.244", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, 
"tenant_id": "736bbfddbeea47e3ac9d863ba120b8f2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd6164edf-ad", "ovs_interfaceid": "d6164edf-adb9-4fa5-9e6d-bae85d8af633", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  9 11:03:23 compute-0 nova_compute[189493]: 2025-12-09 11:03:23.064 189497 DEBUG nova.network.os_vif_util [None req-79c5c013-c9c6-4ee3-a892-46e3f0bbb07b e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Converting VIF {"id": "d6164edf-adb9-4fa5-9e6d-bae85d8af633", "address": "fa:16:3e:83:9f:5d", "network": {"id": "c5af7354-5afe-400a-9e13-5500648117d8", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.98", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.244", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "736bbfddbeea47e3ac9d863ba120b8f2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd6164edf-ad", "ovs_interfaceid": "d6164edf-adb9-4fa5-9e6d-bae85d8af633", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  9 11:03:23 compute-0 nova_compute[189493]: 2025-12-09 11:03:23.067 189497 DEBUG nova.network.os_vif_util [None req-79c5c013-c9c6-4ee3-a892-46e3f0bbb07b e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:83:9f:5d,bridge_name='br-int',has_traffic_filtering=True,id=d6164edf-adb9-4fa5-9e6d-bae85d8af633,network=Network(c5af7354-5afe-400a-9e13-5500648117d8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapd6164edf-ad') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  9 11:03:23 compute-0 nova_compute[189493]: 2025-12-09 11:03:23.067 189497 DEBUG os_vif [None req-79c5c013-c9c6-4ee3-a892-46e3f0bbb07b e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:83:9f:5d,bridge_name='br-int',has_traffic_filtering=True,id=d6164edf-adb9-4fa5-9e6d-bae85d8af633,network=Network(c5af7354-5afe-400a-9e13-5500648117d8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapd6164edf-ad') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
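The unplug goes through os-vif: nova converts its network-info dict into a VIFOpenVSwitch object (the "Converting VIF" / "Converted object" lines) and hands it to the 'ovs' plugin. A trimmed, illustrative reconstruction of that entry point, with field values copied from the log lines above (not a drop-in reproduction of nova's actual call):

    import os_vif
    from os_vif.objects import instance_info, network, vif

    os_vif.initialize()
    ovs_vif = vif.VIFOpenVSwitch(
        id='d6164edf-adb9-4fa5-9e6d-bae85d8af633',
        address='fa:16:3e:83:9f:5d',
        bridge_name='br-int',
        vif_name='tapd6164edf-ad',
        network=network.Network(id='c5af7354-5afe-400a-9e13-5500648117d8'))
    info = instance_info.InstanceInfo(
        uuid='32dd7fb0-7003-48cc-b688-4b94946c911f',
        name='vn-afn7y6w-fel25ona52mn-zi55qxbdeak4-vnf-r5yma3vxwd5y')
    # Emits the "Unplugging vif ..." and "Successfully unplugged ..." lines.
    os_vif.unplug(ovs_vif, info)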
Dec  9 11:03:23 compute-0 nova_compute[189493]: 2025-12-09 11:03:23.070 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:03:23 compute-0 nova_compute[189493]: 2025-12-09 11:03:23.071 189497 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd6164edf-ad, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  9 11:03:23 compute-0 nova_compute[189493]: 2025-12-09 11:03:23.074 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:03:23 compute-0 rsyslogd[236818]: message too long (8192) with configured size 8096, begin of message is: 2025-12-09 11:03:23.064 189497 DEBUG nova.virt.libvirt.vif [None req-79c5c013-c9 [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
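rsyslogd is reporting that the nova vif record above (the one carrying the full base64 user_data) exceeded its configured 8096-byte limit, which is why that record appears split mid-base64 a few lines earlier. If complete records matter more than memory, the usual remedy is to raise the limit early in /etc/rsyslog.conf, for example:

    $MaxMessageSize 64k

The directive must appear before any input modules are loaded; see the rsyslog error URL printed in the log line for details.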
Dec  9 11:03:23 compute-0 nova_compute[189493]: 2025-12-09 11:03:23.076 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  9 11:03:23 compute-0 nova_compute[189493]: 2025-12-09 11:03:23.084 189497 INFO os_vif [None req-79c5c013-c9c6-4ee3-a892-46e3f0bbb07b e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:83:9f:5d,bridge_name='br-int',has_traffic_filtering=True,id=d6164edf-adb9-4fa5-9e6d-bae85d8af633,network=Network(c5af7354-5afe-400a-9e13-5500648117d8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapd6164edf-ad')#033[00m
Dec  9 11:03:23 compute-0 nova_compute[189493]: 2025-12-09 11:03:23.084 189497 INFO nova.virt.libvirt.driver [None req-79c5c013-c9c6-4ee3-a892-46e3f0bbb07b e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 32dd7fb0-7003-48cc-b688-4b94946c911f] Deleting instance files /var/lib/nova/instances/32dd7fb0-7003-48cc-b688-4b94946c911f_del#033[00m
Dec  9 11:03:23 compute-0 nova_compute[189493]: 2025-12-09 11:03:23.085 189497 INFO nova.virt.libvirt.driver [None req-79c5c013-c9c6-4ee3-a892-46e3f0bbb07b e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 32dd7fb0-7003-48cc-b688-4b94946c911f] Deletion of /var/lib/nova/instances/32dd7fb0-7003-48cc-b688-4b94946c911f_del complete#033[00m
Dec  9 11:03:23 compute-0 nova_compute[189493]: 2025-12-09 11:03:23.162 189497 INFO nova.compute.manager [None req-79c5c013-c9c6-4ee3-a892-46e3f0bbb07b e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 32dd7fb0-7003-48cc-b688-4b94946c911f] Took 0.41 seconds to destroy the instance on the hypervisor.#033[00m
Dec  9 11:03:23 compute-0 nova_compute[189493]: 2025-12-09 11:03:23.163 189497 DEBUG oslo.service.loopingcall [None req-79c5c013-c9c6-4ee3-a892-46e3f0bbb07b e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec  9 11:03:23 compute-0 nova_compute[189493]: 2025-12-09 11:03:23.164 189497 DEBUG nova.compute.manager [-] [instance: 32dd7fb0-7003-48cc-b688-4b94946c911f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec  9 11:03:23 compute-0 nova_compute[189493]: 2025-12-09 11:03:23.164 189497 DEBUG nova.network.neutron [-] [instance: 32dd7fb0-7003-48cc-b688-4b94946c911f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec  9 11:03:23 compute-0 nova_compute[189493]: 2025-12-09 11:03:23.491 189497 DEBUG nova.compute.manager [req-a6918585-b4aa-401c-b602-c7296ada86c7 req-1f1e4c52-2e1b-4d05-95b9-64951af3dea4 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] [instance: 32dd7fb0-7003-48cc-b688-4b94946c911f] Received event network-vif-unplugged-d6164edf-adb9-4fa5-9e6d-bae85d8af633 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  9 11:03:23 compute-0 nova_compute[189493]: 2025-12-09 11:03:23.492 189497 DEBUG oslo_concurrency.lockutils [req-a6918585-b4aa-401c-b602-c7296ada86c7 req-1f1e4c52-2e1b-4d05-95b9-64951af3dea4 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] Acquiring lock "32dd7fb0-7003-48cc-b688-4b94946c911f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  9 11:03:23 compute-0 nova_compute[189493]: 2025-12-09 11:03:23.492 189497 DEBUG oslo_concurrency.lockutils [req-a6918585-b4aa-401c-b602-c7296ada86c7 req-1f1e4c52-2e1b-4d05-95b9-64951af3dea4 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] Lock "32dd7fb0-7003-48cc-b688-4b94946c911f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  9 11:03:23 compute-0 nova_compute[189493]: 2025-12-09 11:03:23.492 189497 DEBUG oslo_concurrency.lockutils [req-a6918585-b4aa-401c-b602-c7296ada86c7 req-1f1e4c52-2e1b-4d05-95b9-64951af3dea4 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] Lock "32dd7fb0-7003-48cc-b688-4b94946c911f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  9 11:03:23 compute-0 nova_compute[189493]: 2025-12-09 11:03:23.493 189497 DEBUG nova.compute.manager [req-a6918585-b4aa-401c-b602-c7296ada86c7 req-1f1e4c52-2e1b-4d05-95b9-64951af3dea4 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] [instance: 32dd7fb0-7003-48cc-b688-4b94946c911f] No waiting events found dispatching network-vif-unplugged-d6164edf-adb9-4fa5-9e6d-bae85d8af633 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  9 11:03:23 compute-0 nova_compute[189493]: 2025-12-09 11:03:23.493 189497 DEBUG nova.compute.manager [req-a6918585-b4aa-401c-b602-c7296ada86c7 req-1f1e4c52-2e1b-4d05-95b9-64951af3dea4 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] [instance: 32dd7fb0-7003-48cc-b688-4b94946c911f] Received event network-vif-unplugged-d6164edf-adb9-4fa5-9e6d-bae85d8af633 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
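The Acquiring / acquired / "released" triplet above shows how nova serializes external event dispatch: an oslo.concurrency lock named "<instance-uuid>-events" guards pop_instance_event(), and here it was held for 0.000s because no waiter was registered for the unplug event. A minimal sketch of that locking pattern (names illustrative):

    from oslo_concurrency import lockutils

    uuid = '32dd7fb0-7003-48cc-b688-4b94946c911f'
    with lockutils.lock(f'{uuid}-events'):
        # Everything in this block is serialized against other handlers
        # touching the same instance's pending-event table.
        pass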
Dec  9 11:03:23 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:03:23.651 106644 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=8, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '56:ee:a7', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '3e:d4:ad:27:cb:0f'}, ipsec=False) old=SB_Global(nb_cfg=7) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  9 11:03:23 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:03:23.652 106644 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  9 11:03:23 compute-0 nova_compute[189493]: 2025-12-09 11:03:23.654 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:03:24 compute-0 nova_compute[189493]: 2025-12-09 11:03:24.298 189497 DEBUG nova.compute.manager [req-d9e94483-72b1-4f92-bbf7-4ee739fd639d req-6e7a57a1-d226-4fe2-a9b1-42037a7f96c6 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] [instance: 32dd7fb0-7003-48cc-b688-4b94946c911f] Received event network-changed-d6164edf-adb9-4fa5-9e6d-bae85d8af633 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  9 11:03:24 compute-0 nova_compute[189493]: 2025-12-09 11:03:24.299 189497 DEBUG nova.compute.manager [req-d9e94483-72b1-4f92-bbf7-4ee739fd639d req-6e7a57a1-d226-4fe2-a9b1-42037a7f96c6 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] [instance: 32dd7fb0-7003-48cc-b688-4b94946c911f] Refreshing instance network info cache due to event network-changed-d6164edf-adb9-4fa5-9e6d-bae85d8af633. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  9 11:03:24 compute-0 nova_compute[189493]: 2025-12-09 11:03:24.299 189497 DEBUG oslo_concurrency.lockutils [req-d9e94483-72b1-4f92-bbf7-4ee739fd639d req-6e7a57a1-d226-4fe2-a9b1-42037a7f96c6 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] Acquiring lock "refresh_cache-32dd7fb0-7003-48cc-b688-4b94946c911f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  9 11:03:24 compute-0 nova_compute[189493]: 2025-12-09 11:03:24.300 189497 DEBUG oslo_concurrency.lockutils [req-d9e94483-72b1-4f92-bbf7-4ee739fd639d req-6e7a57a1-d226-4fe2-a9b1-42037a7f96c6 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] Acquired lock "refresh_cache-32dd7fb0-7003-48cc-b688-4b94946c911f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  9 11:03:24 compute-0 nova_compute[189493]: 2025-12-09 11:03:24.300 189497 DEBUG nova.network.neutron [req-d9e94483-72b1-4f92-bbf7-4ee739fd639d req-6e7a57a1-d226-4fe2-a9b1-42037a7f96c6 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] [instance: 32dd7fb0-7003-48cc-b688-4b94946c911f] Refreshing network info cache for port d6164edf-adb9-4fa5-9e6d-bae85d8af633 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  9 11:03:24 compute-0 nova_compute[189493]: 2025-12-09 11:03:24.579 189497 DEBUG nova.network.neutron [-] [instance: 32dd7fb0-7003-48cc-b688-4b94946c911f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  9 11:03:24 compute-0 nova_compute[189493]: 2025-12-09 11:03:24.602 189497 INFO nova.compute.manager [-] [instance: 32dd7fb0-7003-48cc-b688-4b94946c911f] Took 1.44 seconds to deallocate network for instance.#033[00m
Dec  9 11:03:24 compute-0 nova_compute[189493]: 2025-12-09 11:03:24.607 189497 INFO nova.network.neutron [req-d9e94483-72b1-4f92-bbf7-4ee739fd639d req-6e7a57a1-d226-4fe2-a9b1-42037a7f96c6 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] [instance: 32dd7fb0-7003-48cc-b688-4b94946c911f] Port d6164edf-adb9-4fa5-9e6d-bae85d8af633 from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.#033[00m
Dec  9 11:03:24 compute-0 nova_compute[189493]: 2025-12-09 11:03:24.607 189497 DEBUG nova.network.neutron [req-d9e94483-72b1-4f92-bbf7-4ee739fd639d req-6e7a57a1-d226-4fe2-a9b1-42037a7f96c6 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] [instance: 32dd7fb0-7003-48cc-b688-4b94946c911f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  9 11:03:24 compute-0 nova_compute[189493]: 2025-12-09 11:03:24.640 189497 DEBUG oslo_concurrency.lockutils [req-d9e94483-72b1-4f92-bbf7-4ee739fd639d req-6e7a57a1-d226-4fe2-a9b1-42037a7f96c6 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] Releasing lock "refresh_cache-32dd7fb0-7003-48cc-b688-4b94946c911f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  9 11:03:24 compute-0 nova_compute[189493]: 2025-12-09 11:03:24.666 189497 DEBUG oslo_concurrency.lockutils [None req-79c5c013-c9c6-4ee3-a892-46e3f0bbb07b e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  9 11:03:24 compute-0 nova_compute[189493]: 2025-12-09 11:03:24.666 189497 DEBUG oslo_concurrency.lockutils [None req-79c5c013-c9c6-4ee3-a892-46e3f0bbb07b e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  9 11:03:24 compute-0 nova_compute[189493]: 2025-12-09 11:03:24.811 189497 DEBUG nova.compute.provider_tree [None req-79c5c013-c9c6-4ee3-a892-46e3f0bbb07b e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Inventory has not changed in ProviderTree for provider: cdc1168d-33c9-4d2c-8f23-1b695a68afd0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  9 11:03:24 compute-0 nova_compute[189493]: 2025-12-09 11:03:24.825 189497 DEBUG nova.scheduler.client.report [None req-79c5c013-c9c6-4ee3-a892-46e3f0bbb07b e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Inventory has not changed for provider cdc1168d-33c9-4d2c-8f23-1b695a68afd0 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
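Placement derives schedulable capacity from that inventory as (total - reserved) * allocation_ratio per resource class, so the report above works out as follows:

    inventory = {
        'VCPU': {'total': 8, 'reserved': 0, 'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB': {'total': 79, 'reserved': 1, 'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        print(rc, (inv['total'] - inv['reserved']) * inv['allocation_ratio'])
    # -> VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 70.2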
Dec  9 11:03:24 compute-0 nova_compute[189493]: 2025-12-09 11:03:24.849 189497 DEBUG oslo_concurrency.lockutils [None req-79c5c013-c9c6-4ee3-a892-46e3f0bbb07b e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.183s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  9 11:03:24 compute-0 nova_compute[189493]: 2025-12-09 11:03:24.871 189497 INFO nova.scheduler.client.report [None req-79c5c013-c9c6-4ee3-a892-46e3f0bbb07b e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Deleted allocations for instance 32dd7fb0-7003-48cc-b688-4b94946c911f#033[00m
Dec  9 11:03:24 compute-0 nova_compute[189493]: 2025-12-09 11:03:24.929 189497 DEBUG oslo_concurrency.lockutils [None req-79c5c013-c9c6-4ee3-a892-46e3f0bbb07b e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lock "32dd7fb0-7003-48cc-b688-4b94946c911f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.188s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  9 11:03:25 compute-0 nova_compute[189493]: 2025-12-09 11:03:25.586 189497 DEBUG nova.compute.manager [req-79e11122-54cb-4766-997a-ed889d2827c1 req-d661a2b6-6bc6-445f-8739-03a228b018f0 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] [instance: 32dd7fb0-7003-48cc-b688-4b94946c911f] Received event network-vif-plugged-d6164edf-adb9-4fa5-9e6d-bae85d8af633 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  9 11:03:25 compute-0 nova_compute[189493]: 2025-12-09 11:03:25.587 189497 DEBUG oslo_concurrency.lockutils [req-79e11122-54cb-4766-997a-ed889d2827c1 req-d661a2b6-6bc6-445f-8739-03a228b018f0 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] Acquiring lock "32dd7fb0-7003-48cc-b688-4b94946c911f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  9 11:03:25 compute-0 nova_compute[189493]: 2025-12-09 11:03:25.587 189497 DEBUG oslo_concurrency.lockutils [req-79e11122-54cb-4766-997a-ed889d2827c1 req-d661a2b6-6bc6-445f-8739-03a228b018f0 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] Lock "32dd7fb0-7003-48cc-b688-4b94946c911f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  9 11:03:25 compute-0 nova_compute[189493]: 2025-12-09 11:03:25.587 189497 DEBUG oslo_concurrency.lockutils [req-79e11122-54cb-4766-997a-ed889d2827c1 req-d661a2b6-6bc6-445f-8739-03a228b018f0 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] Lock "32dd7fb0-7003-48cc-b688-4b94946c911f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  9 11:03:25 compute-0 nova_compute[189493]: 2025-12-09 11:03:25.588 189497 DEBUG nova.compute.manager [req-79e11122-54cb-4766-997a-ed889d2827c1 req-d661a2b6-6bc6-445f-8739-03a228b018f0 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] [instance: 32dd7fb0-7003-48cc-b688-4b94946c911f] No waiting events found dispatching network-vif-plugged-d6164edf-adb9-4fa5-9e6d-bae85d8af633 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  9 11:03:25 compute-0 nova_compute[189493]: 2025-12-09 11:03:25.588 189497 WARNING nova.compute.manager [req-79e11122-54cb-4766-997a-ed889d2827c1 req-d661a2b6-6bc6-445f-8739-03a228b018f0 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] [instance: 32dd7fb0-7003-48cc-b688-4b94946c911f] Received unexpected event network-vif-plugged-d6164edf-adb9-4fa5-9e6d-bae85d8af633 for instance with vm_state deleted and task_state None.#033[00m
Dec  9 11:03:25 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:03:25.655 106644 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=9ec27861-bbe8-48fb-b30f-25b967e1609e, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '8'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  9 11:03:26 compute-0 podman[246229]: 2025-12-09 11:03:26.974017817 +0000 UTC m=+0.117158518 container health_status ceb1c84a2b093143b9383b7e11364d7e851348d724743a0cd9ce4fd0c7070c92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, config_id=edpm, managed_by=edpm_ansible)
Dec  9 11:03:26 compute-0 podman[246228]: 2025-12-09 11:03:26.981161357 +0000 UTC m=+0.125408887 container health_status 8ad198c17f1da12dc50d5e17562d0139fb2a2f84db056ee9551dbf4f34c4cb9d (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, name=ubi9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, build-date=2024-09-18T21:23:30, vcs-type=git, version=9.4, config_id=edpm, release-0.7.12=, architecture=x86_64, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.buildah.version=1.29.0, io.openshift.expose-services=, maintainer=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=kepler, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.component=ubi9-container, managed_by=edpm_ansible, vendor=Red Hat, Inc.)
Dec  9 11:03:27 compute-0 nova_compute[189493]: 2025-12-09 11:03:27.508 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:03:28 compute-0 nova_compute[189493]: 2025-12-09 11:03:28.074 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:03:29 compute-0 podman[203687]: time="2025-12-09T11:03:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  9 11:03:29 compute-0 podman[203687]: @ - - [09/Dec/2025:11:03:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Dec  9 11:03:29 compute-0 podman[203687]: @ - - [09/Dec/2025:11:03:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4809 "" "Go-http-client/1.1"
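These two GETs are the libpod REST API being polled over the podman socket: a full container listing, then a one-shot stats read. A hedged equivalent using the podman-py bindings, assuming they are installed and the socket path matches this host:

    from podman import PodmanClient

    client = PodmanClient(base_url='unix:///run/podman/podman.sock')
    # ~ GET /libpod/containers/json?all=true
    for c in client.containers.list(all=True):
        print(c.name, c.status)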
Dec  9 11:03:30 compute-0 podman[246267]: 2025-12-09 11:03:30.91875371 +0000 UTC m=+0.073060991 container health_status 8f562587c42532f877bd4ac5090cf2d81dd9415b6201e22f74972e6d6b9e9403 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec  9 11:03:30 compute-0 podman[246268]: 2025-12-09 11:03:30.96335281 +0000 UTC m=+0.101407777 container health_status b432835229990b9e7cd237d75f8273b15e565fca524d4ea9a7c1f1bf3c773614 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_id=edpm)
Dec  9 11:03:31 compute-0 openstack_network_exporter[205823]: ERROR   11:03:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  9 11:03:31 compute-0 openstack_network_exporter[205823]: ERROR   11:03:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  9 11:03:31 compute-0 openstack_network_exporter[205823]: ERROR   11:03:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  9 11:03:31 compute-0 openstack_network_exporter[205823]: ERROR   11:03:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  9 11:03:31 compute-0 openstack_network_exporter[205823]: ERROR   11:03:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  9 11:03:32 compute-0 nova_compute[189493]: 2025-12-09 11:03:32.511 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:03:33 compute-0 nova_compute[189493]: 2025-12-09 11:03:33.077 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:03:33 compute-0 nova_compute[189493]: 2025-12-09 11:03:33.843 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  9 11:03:34 compute-0 nova_compute[189493]: 2025-12-09 11:03:34.841 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  9 11:03:35 compute-0 nova_compute[189493]: 2025-12-09 11:03:35.841 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  9 11:03:36 compute-0 nova_compute[189493]: 2025-12-09 11:03:36.837 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  9 11:03:36 compute-0 nova_compute[189493]: 2025-12-09 11:03:36.838 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  9 11:03:36 compute-0 nova_compute[189493]: 2025-12-09 11:03:36.865 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
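The periodic-task lines come from oslo.service, which walks every method a manager registered with its decorator and logs "Running periodic task ..." before each call. A minimal sketch of how such a task is declared (class name and spacing are illustrative, not nova's actual values):

    from oslo_config import cfg
    from oslo_service import periodic_task

    class Manager(periodic_task.PeriodicTasks):
        def __init__(self):
            super().__init__(cfg.CONF)

        @periodic_task.periodic_task(spacing=60)
        def _poll_rescued_instances(self, context):
            pass  # nova's real task unrescues instances that timed out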
Dec  9 11:03:36 compute-0 podman[246306]: 2025-12-09 11:03:36.965223903 +0000 UTC m=+0.117808764 container health_status 5da5cd4e36e0bba48fb617392bc8983ed1dbced7e4599ef74bb3327a2d50468d (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, release=1755695350, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, io.openshift.expose-services=, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, config_id=edpm, architecture=x86_64, name=ubi9-minimal, version=9.6, container_name=openstack_network_exporter)
Dec  9 11:03:37 compute-0 nova_compute[189493]: 2025-12-09 11:03:37.515 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:03:38 compute-0 nova_compute[189493]: 2025-12-09 11:03:38.043 189497 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765278203.0413747, 32dd7fb0-7003-48cc-b688-4b94946c911f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  9 11:03:38 compute-0 nova_compute[189493]: 2025-12-09 11:03:38.044 189497 INFO nova.compute.manager [-] [instance: 32dd7fb0-7003-48cc-b688-4b94946c911f] VM Stopped (Lifecycle Event)#033[00m
Dec  9 11:03:38 compute-0 nova_compute[189493]: 2025-12-09 11:03:38.070 189497 DEBUG nova.compute.manager [None req-c59678a8-1c02-47ce-8444-ac31349f19b0 - - - - - -] [instance: 32dd7fb0-7003-48cc-b688-4b94946c911f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  9 11:03:38 compute-0 nova_compute[189493]: 2025-12-09 11:03:38.081 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:03:39 compute-0 podman[246327]: 2025-12-09 11:03:39.044437217 +0000 UTC m=+0.189786806 container health_status e0a077177b2f078df1f170a6e5c0e8e08d4365b999ec0c487047ed6ab628f3d6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Dec  9 11:03:40 compute-0 nova_compute[189493]: 2025-12-09 11:03:40.842 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  9 11:03:40 compute-0 nova_compute[189493]: 2025-12-09 11:03:40.843 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  9 11:03:40 compute-0 nova_compute[189493]: 2025-12-09 11:03:40.843 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  9 11:03:40 compute-0 podman[246353]: 2025-12-09 11:03:40.96138925 +0000 UTC m=+0.108706392 container health_status d3a438131bb4ae6fd62d2e1493edbbbd51d1b8d6cbe1e9243f414a3aa421452b (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec  9 11:03:41 compute-0 nova_compute[189493]: 2025-12-09 11:03:41.196 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Acquiring lock "refresh_cache-41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  9 11:03:41 compute-0 nova_compute[189493]: 2025-12-09 11:03:41.197 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Acquired lock "refresh_cache-41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  9 11:03:41 compute-0 nova_compute[189493]: 2025-12-09 11:03:41.208 189497 DEBUG nova.network.neutron [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] [instance: 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  9 11:03:41 compute-0 nova_compute[189493]: 2025-12-09 11:03:41.209 189497 DEBUG nova.objects.instance [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  9 11:03:42 compute-0 nova_compute[189493]: 2025-12-09 11:03:42.487 189497 DEBUG nova.network.neutron [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] [instance: 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f] Updating instance_info_cache with network_info: [{"id": "2c684388-b6d9-4de0-8691-29807fabed2c", "address": "fa:16:3e:c7:65:39", "network": {"id": "c5af7354-5afe-400a-9e13-5500648117d8", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.250", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.226", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "736bbfddbeea47e3ac9d863ba120b8f2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2c684388-b6", "ovs_interfaceid": "2c684388-b6d9-4de0-8691-29807fabed2c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  9 11:03:42 compute-0 nova_compute[189493]: 2025-12-09 11:03:42.506 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Releasing lock "refresh_cache-41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  9 11:03:42 compute-0 nova_compute[189493]: 2025-12-09 11:03:42.507 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] [instance: 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  9 11:03:42 compute-0 nova_compute[189493]: 2025-12-09 11:03:42.508 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  9 11:03:42 compute-0 nova_compute[189493]: 2025-12-09 11:03:42.518 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:03:42 compute-0 nova_compute[189493]: 2025-12-09 11:03:42.841 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  9 11:03:43 compute-0 nova_compute[189493]: 2025-12-09 11:03:43.069 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  9 11:03:43 compute-0 nova_compute[189493]: 2025-12-09 11:03:43.070 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  9 11:03:43 compute-0 nova_compute[189493]: 2025-12-09 11:03:43.070 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  9 11:03:43 compute-0 nova_compute[189493]: 2025-12-09 11:03:43.070 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  9 11:03:43 compute-0 nova_compute[189493]: 2025-12-09 11:03:43.082 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:03:43 compute-0 nova_compute[189493]: 2025-12-09 11:03:43.153 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  9 11:03:43 compute-0 nova_compute[189493]: 2025-12-09 11:03:43.234 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  9 11:03:43 compute-0 nova_compute[189493]: 2025-12-09 11:03:43.235 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  9 11:03:43 compute-0 nova_compute[189493]: 2025-12-09 11:03:43.292 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  9 11:03:43 compute-0 nova_compute[189493]: 2025-12-09 11:03:43.293 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  9 11:03:43 compute-0 nova_compute[189493]: 2025-12-09 11:03:43.353 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.eph0 --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  9 11:03:43 compute-0 nova_compute[189493]: 2025-12-09 11:03:43.354 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  9 11:03:43 compute-0 nova_compute[189493]: 2025-12-09 11:03:43.433 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.eph0 --force-share --output=json" returned: 0 in 0.079s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  9 11:03:43 compute-0 nova_compute[189493]: 2025-12-09 11:03:43.441 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  9 11:03:43 compute-0 nova_compute[189493]: 2025-12-09 11:03:43.500 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  9 11:03:43 compute-0 nova_compute[189493]: 2025-12-09 11:03:43.502 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  9 11:03:43 compute-0 nova_compute[189493]: 2025-12-09 11:03:43.595 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk --force-share --output=json" returned: 0 in 0.093s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  9 11:03:43 compute-0 nova_compute[189493]: 2025-12-09 11:03:43.597 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  9 11:03:43 compute-0 nova_compute[189493]: 2025-12-09 11:03:43.690 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.eph0 --force-share --output=json" returned: 0 in 0.093s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  9 11:03:43 compute-0 nova_compute[189493]: 2025-12-09 11:03:43.691 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  9 11:03:43 compute-0 nova_compute[189493]: 2025-12-09 11:03:43.765 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.eph0 --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  9 11:03:44 compute-0 nova_compute[189493]: 2025-12-09 11:03:44.135 189497 WARNING nova.virt.libvirt.driver [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  9 11:03:44 compute-0 nova_compute[189493]: 2025-12-09 11:03:44.136 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4939MB free_disk=72.16051483154297GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  9 11:03:44 compute-0 nova_compute[189493]: 2025-12-09 11:03:44.136 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  9 11:03:44 compute-0 nova_compute[189493]: 2025-12-09 11:03:44.136 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  9 11:03:44 compute-0 nova_compute[189493]: 2025-12-09 11:03:44.222 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Instance 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  9 11:03:44 compute-0 nova_compute[189493]: 2025-12-09 11:03:44.222 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Instance 7b43ca09-ed65-4465-9fcc-95caa6dc9a88 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  9 11:03:44 compute-0 nova_compute[189493]: 2025-12-09 11:03:44.222 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  9 11:03:44 compute-0 nova_compute[189493]: 2025-12-09 11:03:44.222 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  9 11:03:44 compute-0 nova_compute[189493]: 2025-12-09 11:03:44.280 189497 DEBUG nova.compute.provider_tree [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Inventory has not changed in ProviderTree for provider: cdc1168d-33c9-4d2c-8f23-1b695a68afd0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  9 11:03:44 compute-0 nova_compute[189493]: 2025-12-09 11:03:44.293 189497 DEBUG nova.scheduler.client.report [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Inventory has not changed for provider cdc1168d-33c9-4d2c-8f23-1b695a68afd0 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  9 11:03:44 compute-0 nova_compute[189493]: 2025-12-09 11:03:44.314 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  9 11:03:44 compute-0 nova_compute[189493]: 2025-12-09 11:03:44.314 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.178s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  9 11:03:47 compute-0 nova_compute[189493]: 2025-12-09 11:03:47.520 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:03:48 compute-0 nova_compute[189493]: 2025-12-09 11:03:48.085 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:03:48 compute-0 podman[246405]: 2025-12-09 11:03:48.932315403 +0000 UTC m=+0.084274190 container health_status 0391d8911d61abd7376f1f93f329cadfe8d3add845c9e6f46fc2c3dfbcc4f02a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.build-date=20251202, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, managed_by=edpm_ansible)
Dec  9 11:03:49 compute-0 nova_compute[189493]: 2025-12-09 11:03:49.315 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  9 11:03:49 compute-0 nova_compute[189493]: 2025-12-09 11:03:49.315 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  9 11:03:52 compute-0 nova_compute[189493]: 2025-12-09 11:03:52.524 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:03:52 compute-0 podman[246427]: 2025-12-09 11:03:52.978505574 +0000 UTC m=+0.115256148 container health_status 8508a94dacd5acdb5dbf860f4282331529be5c86ebd3e90b10e1dde8bc5013e9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  9 11:03:53 compute-0 nova_compute[189493]: 2025-12-09 11:03:53.087 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:03:56 compute-0 ovn_controller[97780]: 2025-12-09T11:03:56Z|00057|memory_trim|INFO|Detected inactivity (last active 30019 ms ago): trimming memory
Dec  9 11:03:57 compute-0 nova_compute[189493]: 2025-12-09 11:03:57.528 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:03:57 compute-0 podman[246452]: 2025-12-09 11:03:57.956029349 +0000 UTC m=+0.097079981 container health_status ceb1c84a2b093143b9383b7e11364d7e851348d724743a0cd9ce4fd0c7070c92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec  9 11:03:57 compute-0 podman[246451]: 2025-12-09 11:03:57.961199007 +0000 UTC m=+0.113164600 container health_status 8ad198c17f1da12dc50d5e17562d0139fb2a2f84db056ee9551dbf4f34c4cb9d (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release-0.7.12=, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, build-date=2024-09-18T21:23:30, io.buildah.version=1.29.0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=edpm, maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., name=ubi9, distribution-scope=public, architecture=x86_64, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, com.redhat.component=ubi9-container, vendor=Red Hat, Inc., container_name=kepler, managed_by=edpm_ansible, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release=1214.1726694543, vcs-type=git, version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec  9 11:03:58 compute-0 nova_compute[189493]: 2025-12-09 11:03:58.090 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:03:59 compute-0 podman[203687]: time="2025-12-09T11:03:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  9 11:03:59 compute-0 podman[203687]: @ - - [09/Dec/2025:11:03:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Dec  9 11:03:59 compute-0 podman[203687]: @ - - [09/Dec/2025:11:03:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4807 "" "Go-http-client/1.1"
Dec  9 11:04:01 compute-0 openstack_network_exporter[205823]: ERROR   11:04:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  9 11:04:01 compute-0 openstack_network_exporter[205823]: ERROR   11:04:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  9 11:04:01 compute-0 openstack_network_exporter[205823]: ERROR   11:04:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  9 11:04:01 compute-0 openstack_network_exporter[205823]: ERROR   11:04:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  9 11:04:01 compute-0 openstack_network_exporter[205823]: ERROR   11:04:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  9 11:04:01 compute-0 systemd-logind[806]: New session 30 of user zuul.
Dec  9 11:04:01 compute-0 systemd[1]: Started Session 30 of User zuul.
Dec  9 11:04:01 compute-0 podman[246493]: 2025-12-09 11:04:01.620389142 +0000 UTC m=+0.088703867 container health_status b432835229990b9e7cd237d75f8273b15e565fca524d4ea9a7c1f1bf3c773614 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42)
Dec  9 11:04:01 compute-0 podman[246491]: 2025-12-09 11:04:01.624830641 +0000 UTC m=+0.087254509 container health_status 8f562587c42532f877bd4ac5090cf2d81dd9415b6201e22f74972e6d6b9e9403 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec  9 11:04:02 compute-0 nova_compute[189493]: 2025-12-09 11:04:02.529 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:04:02 compute-0 python3[246702]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --format "{{.Names}} {{.Status}}" | grep ceilometer_agent_compute#012 _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  9 11:04:03 compute-0 nova_compute[189493]: 2025-12-09 11:04:03.093 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:04:07 compute-0 nova_compute[189493]: 2025-12-09 11:04:07.533 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:04:07 compute-0 podman[246742]: 2025-12-09 11:04:07.994627666 +0000 UTC m=+0.131598213 container health_status 5da5cd4e36e0bba48fb617392bc8983ed1dbced7e4599ef74bb3327a2d50468d (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, release=1755695350, maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, vcs-type=git, version=9.6, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, architecture=x86_64, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm)
Dec  9 11:04:08 compute-0 nova_compute[189493]: 2025-12-09 11:04:08.097 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:04:10 compute-0 podman[246762]: 2025-12-09 11:04:10.013176348 +0000 UTC m=+0.154051892 container health_status e0a077177b2f078df1f170a6e5c0e8e08d4365b999ec0c487047ed6ab628f3d6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202)
Dec  9 11:04:11 compute-0 podman[246787]: 2025-12-09 11:04:11.977262038 +0000 UTC m=+0.112387940 container health_status d3a438131bb4ae6fd62d2e1493edbbbd51d1b8d6cbe1e9243f414a3aa421452b (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  9 11:04:12 compute-0 nova_compute[189493]: 2025-12-09 11:04:12.536 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:04:13 compute-0 nova_compute[189493]: 2025-12-09 11:04:13.100 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:04:16 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:04:16.997 106644 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  9 11:04:17 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:04:16.999 106644 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  9 11:04:17 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:04:17.000 106644 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  9 11:04:17 compute-0 nova_compute[189493]: 2025-12-09 11:04:17.539 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:04:18 compute-0 nova_compute[189493]: 2025-12-09 11:04:18.104 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:04:20 compute-0 podman[246814]: 2025-12-09 11:04:20.018570854 +0000 UTC m=+0.156539788 container health_status 0391d8911d61abd7376f1f93f329cadfe8d3add845c9e6f46fc2c3dfbcc4f02a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec  9 11:04:20 compute-0 nova_compute[189493]: 2025-12-09 11:04:20.401 189497 DEBUG oslo_concurrency.lockutils [None req-d7fb0ed0-a348-4719-927d-c7570da72916 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Acquiring lock "44ac2ce0-9161-4b3c-baf9-be45585c5f0e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  9 11:04:20 compute-0 nova_compute[189493]: 2025-12-09 11:04:20.403 189497 DEBUG oslo_concurrency.lockutils [None req-d7fb0ed0-a348-4719-927d-c7570da72916 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lock "44ac2ce0-9161-4b3c-baf9-be45585c5f0e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  9 11:04:20 compute-0 nova_compute[189493]: 2025-12-09 11:04:20.427 189497 DEBUG nova.compute.manager [None req-d7fb0ed0-a348-4719-927d-c7570da72916 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 44ac2ce0-9161-4b3c-baf9-be45585c5f0e] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402#033[00m
Dec  9 11:04:20 compute-0 nova_compute[189493]: 2025-12-09 11:04:20.533 189497 DEBUG oslo_concurrency.lockutils [None req-d7fb0ed0-a348-4719-927d-c7570da72916 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  9 11:04:20 compute-0 nova_compute[189493]: 2025-12-09 11:04:20.534 189497 DEBUG oslo_concurrency.lockutils [None req-d7fb0ed0-a348-4719-927d-c7570da72916 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  9 11:04:20 compute-0 nova_compute[189493]: 2025-12-09 11:04:20.551 189497 DEBUG nova.virt.hardware [None req-d7fb0ed0-a348-4719-927d-c7570da72916 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368#033[00m
Dec  9 11:04:20 compute-0 nova_compute[189493]: 2025-12-09 11:04:20.552 189497 INFO nova.compute.claims [None req-d7fb0ed0-a348-4719-927d-c7570da72916 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 44ac2ce0-9161-4b3c-baf9-be45585c5f0e] Claim successful on node compute-0.ctlplane.example.com#033[00m
Dec  9 11:04:20 compute-0 nova_compute[189493]: 2025-12-09 11:04:20.731 189497 DEBUG nova.compute.provider_tree [None req-d7fb0ed0-a348-4719-927d-c7570da72916 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Inventory has not changed in ProviderTree for provider: cdc1168d-33c9-4d2c-8f23-1b695a68afd0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  9 11:04:21 compute-0 nova_compute[189493]: 2025-12-09 11:04:21.121 189497 DEBUG nova.scheduler.client.report [None req-d7fb0ed0-a348-4719-927d-c7570da72916 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Inventory has not changed for provider cdc1168d-33c9-4d2c-8f23-1b695a68afd0 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  9 11:04:21 compute-0 nova_compute[189493]: 2025-12-09 11:04:21.205 189497 DEBUG oslo_concurrency.lockutils [None req-d7fb0ed0-a348-4719-927d-c7570da72916 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.671s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  9 11:04:21 compute-0 nova_compute[189493]: 2025-12-09 11:04:21.207 189497 DEBUG nova.compute.manager [None req-d7fb0ed0-a348-4719-927d-c7570da72916 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 44ac2ce0-9161-4b3c-baf9-be45585c5f0e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799#033[00m
Dec  9 11:04:21 compute-0 nova_compute[189493]: 2025-12-09 11:04:21.272 189497 DEBUG nova.compute.manager [None req-d7fb0ed0-a348-4719-927d-c7570da72916 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 44ac2ce0-9161-4b3c-baf9-be45585c5f0e] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948#033[00m
Dec  9 11:04:21 compute-0 nova_compute[189493]: 2025-12-09 11:04:21.298 189497 INFO nova.virt.libvirt.driver [None req-d7fb0ed0-a348-4719-927d-c7570da72916 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 44ac2ce0-9161-4b3c-baf9-be45585c5f0e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names#033[00m
Dec  9 11:04:21 compute-0 nova_compute[189493]: 2025-12-09 11:04:21.343 189497 DEBUG nova.compute.manager [None req-d7fb0ed0-a348-4719-927d-c7570da72916 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 44ac2ce0-9161-4b3c-baf9-be45585c5f0e] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834#033[00m
Dec  9 11:04:21 compute-0 nova_compute[189493]: 2025-12-09 11:04:21.425 189497 DEBUG nova.compute.manager [None req-d7fb0ed0-a348-4719-927d-c7570da72916 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 44ac2ce0-9161-4b3c-baf9-be45585c5f0e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608#033[00m
Dec  9 11:04:21 compute-0 nova_compute[189493]: 2025-12-09 11:04:21.428 189497 DEBUG nova.virt.libvirt.driver [None req-d7fb0ed0-a348-4719-927d-c7570da72916 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 44ac2ce0-9161-4b3c-baf9-be45585c5f0e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723#033[00m
Dec  9 11:04:21 compute-0 nova_compute[189493]: 2025-12-09 11:04:21.429 189497 INFO nova.virt.libvirt.driver [None req-d7fb0ed0-a348-4719-927d-c7570da72916 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 44ac2ce0-9161-4b3c-baf9-be45585c5f0e] Creating image(s)#033[00m
Dec  9 11:04:21 compute-0 nova_compute[189493]: 2025-12-09 11:04:21.432 189497 DEBUG oslo_concurrency.lockutils [None req-d7fb0ed0-a348-4719-927d-c7570da72916 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Acquiring lock "/var/lib/nova/instances/44ac2ce0-9161-4b3c-baf9-be45585c5f0e/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  9 11:04:21 compute-0 nova_compute[189493]: 2025-12-09 11:04:21.433 189497 DEBUG oslo_concurrency.lockutils [None req-d7fb0ed0-a348-4719-927d-c7570da72916 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lock "/var/lib/nova/instances/44ac2ce0-9161-4b3c-baf9-be45585c5f0e/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  9 11:04:21 compute-0 nova_compute[189493]: 2025-12-09 11:04:21.435 189497 DEBUG oslo_concurrency.lockutils [None req-d7fb0ed0-a348-4719-927d-c7570da72916 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lock "/var/lib/nova/instances/44ac2ce0-9161-4b3c-baf9-be45585c5f0e/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  9 11:04:21 compute-0 nova_compute[189493]: 2025-12-09 11:04:21.437 189497 DEBUG oslo_concurrency.lockutils [None req-d7fb0ed0-a348-4719-927d-c7570da72916 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Acquiring lock "5bb7c4482f5baf067a2d223774ac0caa815bf93f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  9 11:04:21 compute-0 nova_compute[189493]: 2025-12-09 11:04:21.439 189497 DEBUG oslo_concurrency.lockutils [None req-d7fb0ed0-a348-4719-927d-c7570da72916 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lock "5bb7c4482f5baf067a2d223774ac0caa815bf93f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  9 11:04:22 compute-0 nova_compute[189493]: 2025-12-09 11:04:22.567 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:04:22 compute-0 nova_compute[189493]: 2025-12-09 11:04:22.817 189497 DEBUG oslo_concurrency.processutils [None req-d7fb0ed0-a348-4719-927d-c7570da72916 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5bb7c4482f5baf067a2d223774ac0caa815bf93f.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  9 11:04:22 compute-0 nova_compute[189493]: 2025-12-09 11:04:22.920 189497 DEBUG oslo_concurrency.processutils [None req-d7fb0ed0-a348-4719-927d-c7570da72916 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5bb7c4482f5baf067a2d223774ac0caa815bf93f.part --force-share --output=json" returned: 0 in 0.103s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
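Annotation: the image probe runs under oslo_concurrency.prlimit, which caps address space (1 GiB) and CPU time (30 s) so a malformed image cannot wedge the agent, and --output=json makes the result machine-readable. A rough equivalent with plain subprocess, resource caps omitted and paths reduced to placeholders:

    import json
    import os
    import subprocess

    # Rough equivalent of the logged probe, minus the prlimit wrapper.
    # --force-share lets qemu-img read an image that may be in use.
    def image_info(path):
        out = subprocess.check_output(
            ['qemu-img', 'info', path, '--force-share', '--output=json'],
            env={**os.environ, 'LC_ALL': 'C', 'LANG': 'C'})
        return json.loads(out)

    info = image_info('/tmp/base.img')          # placeholder path
    print(info['format'], info['virtual-size'])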
Dec  9 11:04:22 compute-0 nova_compute[189493]: 2025-12-09 11:04:22.923 189497 DEBUG nova.virt.images [None req-d7fb0ed0-a348-4719-927d-c7570da72916 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] 9d62c0b6-ea01-495f-87e9-b5532d7a4e36 was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242#033[00m
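Annotation: the 40-character lock and cache name "5bb7c448..." in the surrounding lines is not random; nova names cached base images after the SHA-1 digest of the Glance image UUID. A quick way to reproduce the digest from the UUID in the line above:

    import hashlib

    # The base-image cache key: SHA-1 hex digest of the Glance image UUID,
    # matching the 40-character lock/file name in the surrounding lines.
    image_id = '9d62c0b6-ea01-495f-87e9-b5532d7a4e36'  # from the fetch_to_raw line
    print(hashlib.sha1(image_id.encode('utf-8')).hexdigest())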
Dec  9 11:04:22 compute-0 nova_compute[189493]: 2025-12-09 11:04:22.925 189497 DEBUG nova.privsep.utils [None req-d7fb0ed0-a348-4719-927d-c7570da72916 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63#033[00m
Dec  9 11:04:22 compute-0 nova_compute[189493]: 2025-12-09 11:04:22.926 189497 DEBUG oslo_concurrency.processutils [None req-d7fb0ed0-a348-4719-927d-c7570da72916 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/5bb7c4482f5baf067a2d223774ac0caa815bf93f.part /var/lib/nova/instances/_base/5bb7c4482f5baf067a2d223774ac0caa815bf93f.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  9 11:04:23 compute-0 nova_compute[189493]: 2025-12-09 11:04:23.106 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:04:23 compute-0 nova_compute[189493]: 2025-12-09 11:04:23.185 189497 DEBUG oslo_concurrency.processutils [None req-d7fb0ed0-a348-4719-927d-c7570da72916 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/5bb7c4482f5baf067a2d223774ac0caa815bf93f.part /var/lib/nova/instances/_base/5bb7c4482f5baf067a2d223774ac0caa815bf93f.converted" returned: 0 in 0.259s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
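Annotation: with force_raw_images enabled (its default), the fetched qcow2 is flattened to raw before entering the cache; -t none bypasses the host page cache, which is safe here because the instances directory was just probed for direct-I/O support. The same conversion from Python, with placeholder paths:

    import subprocess

    # Sketch of the logged conversion step: qcow2 in, raw out, no host caching.
    subprocess.check_call([
        'qemu-img', 'convert',
        '-t', 'none',    # bypass host page cache (requires direct I/O support)
        '-O', 'raw',     # output format
        '-f', 'qcow2',   # input format
        '/tmp/base.part', '/tmp/base.converted',   # placeholder paths
    ])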
Dec  9 11:04:23 compute-0 nova_compute[189493]: 2025-12-09 11:04:23.191 189497 DEBUG oslo_concurrency.processutils [None req-d7fb0ed0-a348-4719-927d-c7570da72916 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5bb7c4482f5baf067a2d223774ac0caa815bf93f.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  9 11:04:23 compute-0 nova_compute[189493]: 2025-12-09 11:04:23.283 189497 DEBUG oslo_concurrency.processutils [None req-d7fb0ed0-a348-4719-927d-c7570da72916 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5bb7c4482f5baf067a2d223774ac0caa815bf93f.converted --force-share --output=json" returned: 0 in 0.092s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  9 11:04:23 compute-0 nova_compute[189493]: 2025-12-09 11:04:23.285 189497 DEBUG oslo_concurrency.lockutils [None req-d7fb0ed0-a348-4719-927d-c7570da72916 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lock "5bb7c4482f5baf067a2d223774ac0caa815bf93f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 1.846s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.296 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is greater than the number of worker threads available to execute them. Therefore, the polling cycle can be expected to take longer than usual. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.298 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
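Annotation: the two lines above explain the warning that precedes them; every pollster in the [pollsters] source is funneled through a single worker thread, so they execute serially. A toy version of that dispatch (function and meter names are made up, not ceilometer's API):

    from concurrent.futures import ThreadPoolExecutor

    # Toy dispatch: with max_workers=1 the submitted pollsters run one after
    # another, so a single slow pollster delays the whole polling cycle.
    def poll(meter):
        return 'polled ' + meter

    meters = ['network.incoming.bytes', 'disk.device.capacity']
    with ThreadPoolExecutor(max_workers=1) as executor:
        for result in executor.map(poll, meters):
            print(result)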
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.299 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b800>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75cde150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.299 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f8a75e1b7d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.300 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e19820>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75cde150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.302 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75eb8080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75cde150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.302 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75eb8110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75cde150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.302 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b1a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75cde150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.303 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75eb81a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75cde150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.303 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b2c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75cde150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.303 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75cde150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.303 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75cde150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.304 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a78fa8380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75cde150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.305 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a7702ebd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75cde150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.306 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b3e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75cde150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.306 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75cde150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.306 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75eb8440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75cde150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.307 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a78c21460>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75cde150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.307 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b4a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75cde150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.307 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1bce0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75cde150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.307 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75cde150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.307 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1bd10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75cde150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.308 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75cde150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.308 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1bd70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75cde150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.309 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1bdd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75cde150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.310 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1be30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75cde150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.310 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1bf20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75cde150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.310 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b7a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75cde150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.311 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '7b43ca09-ed65-4465-9fcc-95caa6dc9a88', 'name': 'vn-afn7y6w-4mhk6z2gnzo4-cnlzzwhsflo5-vnf-4ifywm3gsfrq', 'flavor': {'id': 'cf91b364-8467-4d1e-8c92-f7d1fab99905', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '53d12211-5d5c-4333-b3ee-e3dcf1663767'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000004', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '736bbfddbeea47e3ac9d863ba120b8f2', 'user_id': 'e6d3a937c2a74eb0816d9f63820935e0', 'hostId': '17e7a15a42f56673ff2b1bfd38625d4824c4455b94d5713ec4c3a7ee', 'status': 'active', 'metadata': {'metering.server_group': '24f6e5b2-dd43-46f1-87a4-e2efc1300914'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
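Annotation: discovery emits one flat dict per local libvirt domain, and pollsters pick resource metadata out of it when building samples. A sketch of the fields a compute meter typically reads, trimmed from the payload above (key selection is illustrative, not ceilometer's exact mapping):

    # Sketch: fields a compute pollster reads from the discovery payload.
    instance = {
        'id': '7b43ca09-ed65-4465-9fcc-95caa6dc9a88',
        'flavor': {'name': 'm1.small', 'vcpus': 1, 'ram': 512},
        'tenant_id': '736bbfddbeea47e3ac9d863ba120b8f2',
        'metadata': {'metering.server_group': '24f6e5b2-dd43-46f1-87a4-e2efc1300914'},
    }
    resource_metadata = {
        'instance_type': instance['flavor']['name'],
        'vcpus': instance['flavor']['vcpus'],
        'server_group': instance['metadata'].get('metering.server_group'),
    }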
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.311 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1bfb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75cde150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 11:04:23 compute-0 nova_compute[189493]: 2025-12-09 11:04:23.315 189497 DEBUG oslo_concurrency.processutils [None req-d7fb0ed0-a348-4719-927d-c7570da72916 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5bb7c4482f5baf067a2d223774ac0caa815bf93f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.317 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f', 'name': 'test_0', 'flavor': {'id': 'cf91b364-8467-4d1e-8c92-f7d1fab99905', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '53d12211-5d5c-4333-b3ee-e3dcf1663767'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '736bbfddbeea47e3ac9d863ba120b8f2', 'user_id': 'e6d3a937c2a74eb0816d9f63820935e0', 'hostId': '17e7a15a42f56673ff2b1bfd38625d4824c4455b94d5713ec4c3a7ee', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.318 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.318 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1b800>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.318 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1b800>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.318 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.320 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-12-09T11:04:23.318566) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.327 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/network.incoming.bytes volume: 1654 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.335 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/network.incoming.bytes volume: 2346 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.337 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.337 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f8a7854a570>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.337 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.338 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e19820>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.338 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e19820>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.338 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.339 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-12-09T11:04:23.338396) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.379 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.380 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.380 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:04:23 compute-0 nova_compute[189493]: 2025-12-09 11:04:23.399 189497 DEBUG oslo_concurrency.processutils [None req-d7fb0ed0-a348-4719-927d-c7570da72916 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5bb7c4482f5baf067a2d223774ac0caa815bf93f --force-share --output=json" returned: 0 in 0.084s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  9 11:04:23 compute-0 nova_compute[189493]: 2025-12-09 11:04:23.401 189497 DEBUG oslo_concurrency.lockutils [None req-d7fb0ed0-a348-4719-927d-c7570da72916 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Acquiring lock "5bb7c4482f5baf067a2d223774ac0caa815bf93f" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  9 11:04:23 compute-0 nova_compute[189493]: 2025-12-09 11:04:23.402 189497 DEBUG oslo_concurrency.lockutils [None req-d7fb0ed0-a348-4719-927d-c7570da72916 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lock "5bb7c4482f5baf067a2d223774ac0caa815bf93f" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.417 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.417 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.418 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.418 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.419 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f8a75eb8050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.419 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.419 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75eb8080>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.419 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75eb8080>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.419 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.420 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/network.outgoing.packets volume: 22 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.420 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-12-09T11:04:23.419855) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.420 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/network.outgoing.packets volume: 24 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.421 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.421 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f8a75eb80e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.421 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.421 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75eb8110>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.422 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75eb8110>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.422 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.422 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.422 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.423 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.423 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f8a75e1b260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.423 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.424 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1b1a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.424 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-12-09T11:04:23.422217) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.424 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1b1a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.424 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.425 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-12-09T11:04:23.424596) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  9 11:04:23 compute-0 nova_compute[189493]: 2025-12-09 11:04:23.442 189497 DEBUG oslo_concurrency.processutils [None req-d7fb0ed0-a348-4719-927d-c7570da72916 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5bb7c4482f5baf067a2d223774ac0caa815bf93f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.518 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.518 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.518 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:04:23 compute-0 nova_compute[189493]: 2025-12-09 11:04:23.538 189497 DEBUG oslo_concurrency.processutils [None req-d7fb0ed0-a348-4719-927d-c7570da72916 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5bb7c4482f5baf067a2d223774ac0caa815bf93f --force-share --output=json" returned: 0 in 0.096s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  9 11:04:23 compute-0 nova_compute[189493]: 2025-12-09 11:04:23.539 189497 DEBUG oslo_concurrency.processutils [None req-d7fb0ed0-a348-4719-927d-c7570da72916 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/5bb7c4482f5baf067a2d223774ac0caa815bf93f,backing_fmt=raw /var/lib/nova/instances/44ac2ce0-9161-4b3c-baf9-be45585c5f0e/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  9 11:04:23 compute-0 nova_compute[189493]: 2025-12-09 11:04:23.586 189497 DEBUG oslo_concurrency.processutils [None req-d7fb0ed0-a348-4719-927d-c7570da72916 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/5bb7c4482f5baf067a2d223774ac0caa815bf93f,backing_fmt=raw /var/lib/nova/instances/44ac2ce0-9161-4b3c-baf9-be45585c5f0e/disk 1073741824" returned: 0 in 0.047s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
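Annotation: the instance disk is created as a copy-on-write qcow2 overlay; reads fall through to the shared raw base image, writes land in the per-instance file, and the 1073741824 argument (1 GiB) matches the flavor's 1 GB root disk. The same call from Python, reusing the paths from the log:

    import subprocess

    # Sketch of the logged overlay creation: per-instance qcow2 disk backed by
    # the shared raw base image, with a 1 GiB virtual size.
    base = '/var/lib/nova/instances/_base/5bb7c4482f5baf067a2d223774ac0caa815bf93f'
    disk = '/var/lib/nova/instances/44ac2ce0-9161-4b3c-baf9-be45585c5f0e/disk'
    subprocess.check_call([
        'qemu-img', 'create', '-f', 'qcow2',
        '-o', 'backing_file=%s,backing_fmt=raw' % base,
        disk, '1073741824',
    ])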
Dec  9 11:04:23 compute-0 nova_compute[189493]: 2025-12-09 11:04:23.588 189497 DEBUG oslo_concurrency.lockutils [None req-d7fb0ed0-a348-4719-927d-c7570da72916 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lock "5bb7c4482f5baf067a2d223774ac0caa815bf93f" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.185s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  9 11:04:23 compute-0 nova_compute[189493]: 2025-12-09 11:04:23.589 189497 DEBUG oslo_concurrency.processutils [None req-d7fb0ed0-a348-4719-927d-c7570da72916 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5bb7c4482f5baf067a2d223774ac0caa815bf93f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.610 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.611 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.612 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.613 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.614 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f8a75eb8170>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.614 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.615 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75eb81a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.615 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75eb81a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.616 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.617 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.617 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-12-09T11:04:23.616365) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.617 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.619 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.619 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f8a75e1b290>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.620 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.621 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1b2c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.621 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1b2c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.622 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.622 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-12-09T11:04:23.622041) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.622 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.read.latency volume: 492966519 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.623 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.read.latency volume: 88653492 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.623 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.read.latency volume: 59040938 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.624 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.read.latency volume: 469600468 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.624 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.read.latency volume: 78501609 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.624 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.read.latency volume: 60811824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.625 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.626 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f8a75e1b2f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.626 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.626 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1b320>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.626 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1b320>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.626 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.627 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.627 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.628 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.628 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.628 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.629 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.629 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
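
Each meter above follows the same five-step trace: a discovery pass (local_instances), a coordination check against the configured hashrings, a heartbeat update, one _stats_to_sample call per (instance, device) pair, and a closing INFO line. The following is a minimal, self-contained Python sketch of that control flow; every name in it is illustrative and is not ceilometer's actual API.

    # Illustrative model of the per-pollster cycle traced in the log above;
    # class and function names are invented, not ceilometer's real ones.
    from dataclasses import dataclass

    @dataclass
    class Sample:
        resource_id: str
        name: str
        volume: int

    class FakeReadRequestsPollster:
        name = "disk.device.read.requests"
        coordination_group = None  # matches "coordination group name [None]"

        def get_samples(self, resources):
            # One sample per (instance, device); volumes taken from the log.
            for instance in resources:
                for per_device_volume in (840, 173, 124):
                    yield Sample(instance, self.name, per_device_volume)

    def run_cycle(pollster, discover):
        resources = discover()                   # "Executing discovery process"
        if not resources:
            return                               # "Skip pollster ..." branch
        if pollster.coordination_group is None:  # "Checking if we need coordination"
            pass                                 # no hashring membership needed
        print(f"heartbeat: {pollster.name}")     # "Pollster heartbeat update"
        for s in pollster.get_samples(resources):
            print(f"{s.resource_id}/{s.name} volume: {s.volume}")

    run_cycle(FakeReadRequestsPollster(),
              lambda: ["7b43ca09-ed65-4465-9fcc-95caa6dc9a88"])
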
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.629 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f8a75e1b350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.630 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.630 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1b380>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.630 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1b380>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.630 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.630 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.usage volume: 21299200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.630 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.630 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.630 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.631 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.631 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.631 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
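
The three volumes per instance UUID in the disk.device.* meters above are one sample per attached block device. A toy reproduction of that fan-out, with device names assumed for illustration:

    # Assumed device names; the usage values are the ones logged above for
    # instance 7b43ca09-ed65-4465-9fcc-95caa6dc9a88.
    per_device_usage = {"vda": 21299200, "vdb": 393216, "vdc": 583680}
    instance = "7b43ca09-ed65-4465-9fcc-95caa6dc9a88"
    for device, used_bytes in per_device_usage.items():
        print(f"{instance}/disk.device.usage volume: {used_bytes}")
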
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.631 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f8a7710f530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.632 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.632 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a78fa8380>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.632 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a78fa8380>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.632 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.631 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-12-09T11:04:23.626920) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.632 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-12-09T11:04:23.630256) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.632 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-12-09T11:04:23.632192) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
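
Note the PID column: worker 14 emits the "Pollster heartbeat update" lines while worker 12 logs "Updated heartbeat for ... (timestamp)" a moment later, which suggests heartbeats are handed off to a separate status-tracking worker. Below is a queue-based toy model of that handoff; the mechanism is an assumption drawn from the interleaving, not ceilometer's implementation.

    # Toy model of the heartbeat handoff implied by the interleaved PIDs.
    import datetime
    import queue
    import threading

    heartbeats: queue.Queue = queue.Queue()

    def poller():  # plays the role of log PID 14
        for name in ("disk.device.read.requests", "disk.device.usage", "cpu"):
            heartbeats.put(name)   # "Pollster heartbeat update: <name>"
        heartbeats.put(None)       # sentinel: no more heartbeats

    def status_updater():  # plays the role of log PID 12
        while (name := heartbeats.get()) is not None:
            ts = datetime.datetime.now(datetime.timezone.utc).isoformat()
            print(f"Updated heartbeat for {name} ({ts})")

    updater = threading.Thread(target=status_updater)
    updater.start()
    poller()
    updater.join()
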
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.663 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/cpu volume: 38010000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:04:23 compute-0 nova_compute[189493]: 2025-12-09 11:04:23.678 189497 DEBUG oslo_concurrency.processutils [None req-d7fb0ed0-a348-4719-927d-c7570da72916 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/5bb7c4482f5baf067a2d223774ac0caa815bf93f --force-share --output=json" returned: 0 in 0.089s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  9 11:04:23 compute-0 nova_compute[189493]: 2025-12-09 11:04:23.680 189497 DEBUG nova.virt.disk.api [None req-d7fb0ed0-a348-4719-927d-c7570da72916 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Checking if we can resize image /var/lib/nova/instances/44ac2ce0-9161-4b3c-baf9-be45585c5f0e/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166
Dec  9 11:04:23 compute-0 nova_compute[189493]: 2025-12-09 11:04:23.681 189497 DEBUG oslo_concurrency.processutils [None req-d7fb0ed0-a348-4719-927d-c7570da72916 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/44ac2ce0-9161-4b3c-baf9-be45585c5f0e/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
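
The nova lines interleaved here show how nova probes disk images: qemu-img info runs under oslo_concurrency.prlimit with a 1 GiB address-space cap and a 30 s CPU cap (a guard against malformed images), with --force-share so the probe does not take the image lock held by the running guest, and JSON output for parsing. The same invocation can be reproduced by hand; the flags and paths below are copied from the log line and will differ per host.

    # Re-running the exact command from the nova log line above.
    import json
    import subprocess

    cmd = [
        "/usr/bin/python3", "-m", "oslo_concurrency.prlimit",
        "--as=1073741824",   # cap address space at 1 GiB
        "--cpu=30",          # cap CPU time at 30 seconds
        "--",
        "env", "LC_ALL=C", "LANG=C",
        "qemu-img", "info",
        "/var/lib/nova/instances/44ac2ce0-9161-4b3c-baf9-be45585c5f0e/disk",
        "--force-share",     # read without taking the image lock
        "--output=json",
    ]
    info = json.loads(subprocess.check_output(cmd))
    print(info["format"], info["virtual-size"])
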
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.689 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/cpu volume: 47020000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.690 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
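
The cpu meter's volume is cumulative guest CPU time in nanoseconds (38,010,000,000 ns is roughly 38 s for the first instance), so a utilization percentage needs two successive polls. Worked arithmetic, with the second reading and the interval invented for illustration:

    # First value is from the log; the later reading, the 300 s polling
    # interval, and the vCPU count are assumptions for the arithmetic.
    v0 = 38_010_000_000        # ns, from the log line above
    v1 = 38_460_000_000        # ns, hypothetical reading one interval later
    interval_s = 300.0         # assumed polling interval
    vcpus = 1                  # assumed flavor size
    cpu_util = (v1 - v0) / (interval_s * 1e9 * vcpus) * 100
    print(f"cpu_util ≈ {cpu_util:.2f}%")   # ≈ 0.15%
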
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.690 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f8a78ed1430>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.691 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.691 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a7702ebd0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.691 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a7702ebd0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.692 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.692 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-12-09T11:04:23.691691) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.693 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.allocation volume: 21635072 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.693 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.694 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.694 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.allocation volume: 21307392 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.694 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.695 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.696 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.696 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f8a75e1b3b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.696 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.696 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1b3e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.696 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1b3e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.697 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-12-09T11:04:23.697028) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.697 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.698 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.698 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.698 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.699 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.699 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.700 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.700 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.701 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f8a75e1b410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.701 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.701 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1b440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.701 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1b440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.702 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-12-09T11:04:23.701845) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.702 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.702 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.write.latency volume: 2223058984 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.703 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.write.latency volume: 10632793 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.703 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.704 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.write.latency volume: 1299788707 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.704 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.write.latency volume: 9241063 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.705 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.705 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.706 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f8a75eb8410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.706 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.706 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75eb8440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.706 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75eb8440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.707 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-12-09T11:04:23.707054) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.707 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.708 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.708 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.709 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
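
The power.state volume of 1 for both instances is the numeric power state nova reports. By nova's conventional power_state numbering (reproduced from memory here, so worth verifying against nova.compute.power_state) a value of 1 means RUNNING:

    # Conventional nova power_state numbering; verify before relying on it.
    POWER_STATES = {
        0: "NOSTATE",
        1: "RUNNING",
        3: "PAUSED",
        4: "SHUTDOWN",
        6: "CRASHED",
        7: "SUSPENDED",
    }
    print(POWER_STATES[1])  # RUNNING, matching both instances above
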
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.709 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f8a75e1be90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.709 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.709 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a78c21460>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.710 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a78c21460>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.710 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-12-09T11:04:23.710116) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.710 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.711 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/network.outgoing.bytes volume: 2356 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.711 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/network.outgoing.bytes volume: 2384 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.711 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.711 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f8a75e1b470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.712 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.712 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1b4a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.712 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1b4a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.712 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-12-09T11:04:23.712356) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.712 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.713 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.write.requests volume: 232 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.713 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.713 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.713 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.write.requests volume: 234 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.714 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.714 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.714 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.715 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f8a75e1b830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.715 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.715 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1bce0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.715 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1bce0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.715 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.716 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-12-09T11:04:23.715573) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.716 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.716 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.716 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
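
The .delta variants report the change since the previous poll rather than a lifetime counter, which is why both instances show a small 84 here while network.incoming.bytes elsewhere carries cumulative totals. A toy reproduction of that bookkeeping, with the cumulative readings invented so the delta matches the log:

    # Minimal delta-meter bookkeeping; the 84-byte delta matches the log.
    previous: dict[str, int] = {}

    def delta_sample(resource_id: str, cumulative: int) -> int:
        delta = cumulative - previous.get(resource_id, cumulative)
        previous[resource_id] = cumulative
        return delta

    rid = "7b43ca09-ed65-4465-9fcc-95caa6dc9a88"
    delta_sample(rid, 1_000)          # first poll just sets the baseline
    print(delta_sample(rid, 1_084))   # 84, as in the log line above
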
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.717 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f8a75e1b4d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.717 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.717 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1b500>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.717 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1b500>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.717 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.718 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-12-09T11:04:23.717523) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.718 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.718 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f8a75e1bad0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.718 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
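
The .rate variants are skipped this cycle because discovery returned no new resources for them; when they do run, a rate is simply the change in a cumulative counter divided by the elapsed time between polls. Illustrative arithmetic only, with both readings invented:

    # Hypothetical readings; 15,000 bytes over 300 s gives 50 B/s.
    t0, bytes0 = 0.0, 10_000
    t1, bytes1 = 300.0, 25_000
    print((bytes1 - bytes0) / (t1 - t0), "B/s")  # 50.0 B/s
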
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.719 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f8a75e1b530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.719 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.719 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1b560>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.719 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1b560>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.719 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.720 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-12-09T11:04:23.719499) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.720 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.721 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f8a75e1bd40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.721 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.721 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1bd70>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.721 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1bd70>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.722 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-12-09T11:04:23.721522) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.721 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.722 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/network.incoming.packets volume: 16 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.722 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/network.incoming.packets volume: 26 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.722 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.723 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f8a75e1bda0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.723 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.723 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1bdd0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.723 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1bdd0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.723 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.724 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-12-09T11:04:23.723476) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.724 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.724 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.724 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.725 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f8a75e1be00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.725 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.725 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1be30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.725 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1be30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.726 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-12-09T11:04:23.725514) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.725 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.726 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.726 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.726 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.727 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f8a75e1bef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.727 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.727 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1bf20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.727 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1bf20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.727 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.728 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-12-09T11:04:23.727448) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.728 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.728 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.728 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.729 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f8a75e1b770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.729 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.729 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1b7a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.729 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1b7a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.729 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.730 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-12-09T11:04:23.729458) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.730 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/memory.usage volume: 48.953125 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.730 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/memory.usage volume: 48.796875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.730 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
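
memory.usage is reported in MB, and the fractional volumes above are exact KiB counts divided by 1024:

    # 50,128 KiB / 1024 reproduces the first memory.usage volume exactly.
    resident_kib = 50_128
    print(resident_kib / 1024)  # 48.953125 MB, as logged above
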
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.730 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f8a75e1bf80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.731 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.731 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.731 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.731 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.731 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.731 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.732 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.732 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.732 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.732 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.732 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.732 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.732 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.732 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.732 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.732 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.732 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.732 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.732 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.732 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.733 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.733 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.733 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.733 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.733 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.733 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 11:04:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:04:23.733 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
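The run of "Finished processing pollster [...]" messages above marks the end of one ceilometer polling cycle on this node, one line per enabled meter. A minimal sketch of tallying those completions from a saved journal export (the log path and the regular expression are assumptions for illustration, not part of ceilometer):

    import re
    from collections import Counter

    # Count "Finished processing pollster [<name>]" completions per meter.
    PAT = re.compile(r"Finished processing pollster \[([\w.]+)\]")

    def finished_pollsters(log_path="/var/log/messages"):  # path is an assumption
        counts = Counter()
        with open(log_path, encoding="utf-8", errors="replace") as fh:
            for line in fh:
                m = PAT.search(line)
                if m:
                    counts[m.group(1)] += 1
        return counts

    if __name__ == "__main__":
        for name, hits in sorted(finished_pollsters().items()):
            print(f"{name}: {hits}")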
Dec  9 11:04:23 compute-0 nova_compute[189493]: 2025-12-09 11:04:23.788 189497 DEBUG oslo_concurrency.processutils [None req-d7fb0ed0-a348-4719-927d-c7570da72916 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/44ac2ce0-9161-4b3c-baf9-be45585c5f0e/disk --force-share --output=json" returned: 0 in 0.107s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  9 11:04:23 compute-0 nova_compute[189493]: 2025-12-09 11:04:23.790 189497 DEBUG nova.virt.disk.api [None req-d7fb0ed0-a348-4719-927d-c7570da72916 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Cannot resize image /var/lib/nova/instances/44ac2ce0-9161-4b3c-baf9-be45585c5f0e/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172#033[00m
Dec  9 11:04:23 compute-0 nova_compute[189493]: 2025-12-09 11:04:23.791 189497 DEBUG nova.objects.instance [None req-d7fb0ed0-a348-4719-927d-c7570da72916 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lazy-loading 'migration_context' on Instance uuid 44ac2ce0-9161-4b3c-baf9-be45585c5f0e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  9 11:04:23 compute-0 nova_compute[189493]: 2025-12-09 11:04:23.813 189497 DEBUG oslo_concurrency.lockutils [None req-d7fb0ed0-a348-4719-927d-c7570da72916 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Acquiring lock "/var/lib/nova/instances/44ac2ce0-9161-4b3c-baf9-be45585c5f0e/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  9 11:04:23 compute-0 nova_compute[189493]: 2025-12-09 11:04:23.814 189497 DEBUG oslo_concurrency.lockutils [None req-d7fb0ed0-a348-4719-927d-c7570da72916 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lock "/var/lib/nova/instances/44ac2ce0-9161-4b3c-baf9-be45585c5f0e/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  9 11:04:23 compute-0 nova_compute[189493]: 2025-12-09 11:04:23.815 189497 DEBUG oslo_concurrency.lockutils [None req-d7fb0ed0-a348-4719-927d-c7570da72916 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lock "/var/lib/nova/instances/44ac2ce0-9161-4b3c-baf9-be45585c5f0e/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
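The Acquiring/acquired/released triplet around disk.info above is oslo.concurrency's standard external file-lock pattern. A minimal sketch of the same pattern, not nova's actual code (lock name, lock_path and the function body are illustrative):

    from oslo_concurrency import lockutils

    # Serialise writers of disk.info with a named external (file-based) lock,
    # mirroring the Acquiring/acquired/released lines in the log above.
    @lockutils.synchronized("disk.info", external=True,
                            lock_path="/var/lib/nova/instances")
    def write_to_disk_info_file():
        # nova records the resolved image format here so a later boot cannot
        # be tricked into re-probing the format of a guest-controlled file.
        pass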
Dec  9 11:04:23 compute-0 nova_compute[189493]: 2025-12-09 11:04:23.848 189497 DEBUG oslo_concurrency.processutils [None req-d7fb0ed0-a348-4719-927d-c7570da72916 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  9 11:04:23 compute-0 nova_compute[189493]: 2025-12-09 11:04:23.915 189497 DEBUG oslo_concurrency.processutils [None req-d7fb0ed0-a348-4719-927d-c7570da72916 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
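The qemu-img probes above are wrapped in oslo_concurrency.prlimit, which caps address space at 1 GiB (--as=1073741824) and CPU time at 30 s so a malformed image cannot wedge the compute service. A sketch reproducing the same bounded probe (the helper name is an assumption; the command itself is copied from the CMD lines above):

    import json
    import subprocess

    def qemu_img_info(path):
        # Resource-limited qemu-img info, shared-lock read, JSON output.
        cmd = [
            "/usr/bin/python3", "-m", "oslo_concurrency.prlimit",
            "--as=1073741824", "--cpu=30", "--",
            "env", "LC_ALL=C", "LANG=C",
            "qemu-img", "info", path, "--force-share", "--output=json",
        ]
        out = subprocess.run(cmd, check=True, capture_output=True, text=True)
        return json.loads(out.stdout)

    # info = qemu_img_info("/var/lib/nova/instances/_base/ephemeral_1_0706d66")
    # print(info["format"], info["virtual-size"])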
Dec  9 11:04:23 compute-0 nova_compute[189493]: 2025-12-09 11:04:23.916 189497 DEBUG oslo_concurrency.lockutils [None req-d7fb0ed0-a348-4719-927d-c7570da72916 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Acquiring lock "ephemeral_1_0706d66" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  9 11:04:23 compute-0 nova_compute[189493]: 2025-12-09 11:04:23.917 189497 DEBUG oslo_concurrency.lockutils [None req-d7fb0ed0-a348-4719-927d-c7570da72916 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lock "ephemeral_1_0706d66" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  9 11:04:23 compute-0 nova_compute[189493]: 2025-12-09 11:04:23.932 189497 DEBUG oslo_concurrency.processutils [None req-d7fb0ed0-a348-4719-927d-c7570da72916 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  9 11:04:23 compute-0 podman[246864]: 2025-12-09 11:04:23.953305722 +0000 UTC m=+0.088900433 container health_status 8508a94dacd5acdb5dbf860f4282331529be5c86ebd3e90b10e1dde8bc5013e9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  9 11:04:24 compute-0 nova_compute[189493]: 2025-12-09 11:04:24.013 189497 DEBUG oslo_concurrency.processutils [None req-d7fb0ed0-a348-4719-927d-c7570da72916 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  9 11:04:24 compute-0 nova_compute[189493]: 2025-12-09 11:04:24.014 189497 DEBUG oslo_concurrency.processutils [None req-d7fb0ed0-a348-4719-927d-c7570da72916 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/44ac2ce0-9161-4b3c-baf9-be45585c5f0e/disk.eph0 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  9 11:04:24 compute-0 nova_compute[189493]: 2025-12-09 11:04:24.062 189497 DEBUG oslo_concurrency.processutils [None req-d7fb0ed0-a348-4719-927d-c7570da72916 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/44ac2ce0-9161-4b3c-baf9-be45585c5f0e/disk.eph0 1073741824" returned: 0 in 0.048s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  9 11:04:24 compute-0 nova_compute[189493]: 2025-12-09 11:04:24.064 189497 DEBUG oslo_concurrency.lockutils [None req-d7fb0ed0-a348-4719-927d-c7570da72916 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lock "ephemeral_1_0706d66" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.147s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
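The create step above builds the ephemeral disk as a qcow2 overlay on the cached raw base image, so per-instance storage stays copy-on-write. A minimal reproduction of that invocation (the instance path is a placeholder; 1073741824 bytes matches the flavor's ephemeral_gb=1):

    import subprocess

    base = "/var/lib/nova/instances/_base/ephemeral_1_0706d66"
    overlay = "/var/lib/nova/instances/example/disk.eph0"  # placeholder path

    # qcow2 overlay backed by the shared raw base image.
    subprocess.run(
        ["env", "LC_ALL=C", "LANG=C", "qemu-img", "create",
         "-f", "qcow2", "-o", f"backing_file={base},backing_fmt=raw",
         overlay, str(1 * 1024 ** 3)],
        check=True,
    )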
Dec  9 11:04:24 compute-0 nova_compute[189493]: 2025-12-09 11:04:24.065 189497 DEBUG oslo_concurrency.processutils [None req-d7fb0ed0-a348-4719-927d-c7570da72916 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  9 11:04:24 compute-0 nova_compute[189493]: 2025-12-09 11:04:24.128 189497 DEBUG oslo_concurrency.processutils [None req-d7fb0ed0-a348-4719-927d-c7570da72916 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  9 11:04:24 compute-0 nova_compute[189493]: 2025-12-09 11:04:24.130 189497 DEBUG nova.virt.libvirt.driver [None req-d7fb0ed0-a348-4719-927d-c7570da72916 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 44ac2ce0-9161-4b3c-baf9-be45585c5f0e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857#033[00m
Dec  9 11:04:24 compute-0 nova_compute[189493]: 2025-12-09 11:04:24.131 189497 DEBUG nova.virt.libvirt.driver [None req-d7fb0ed0-a348-4719-927d-c7570da72916 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 44ac2ce0-9161-4b3c-baf9-be45585c5f0e] Ensure instance console log exists: /var/lib/nova/instances/44ac2ce0-9161-4b3c-baf9-be45585c5f0e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609#033[00m
Dec  9 11:04:24 compute-0 nova_compute[189493]: 2025-12-09 11:04:24.131 189497 DEBUG oslo_concurrency.lockutils [None req-d7fb0ed0-a348-4719-927d-c7570da72916 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  9 11:04:24 compute-0 nova_compute[189493]: 2025-12-09 11:04:24.132 189497 DEBUG oslo_concurrency.lockutils [None req-d7fb0ed0-a348-4719-927d-c7570da72916 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  9 11:04:24 compute-0 nova_compute[189493]: 2025-12-09 11:04:24.133 189497 DEBUG oslo_concurrency.lockutils [None req-d7fb0ed0-a348-4719-927d-c7570da72916 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  9 11:04:24 compute-0 nova_compute[189493]: 2025-12-09 11:04:24.137 189497 DEBUG nova.virt.libvirt.driver [None req-d7fb0ed0-a348-4719-927d-c7570da72916 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 44ac2ce0-9161-4b3c-baf9-be45585c5f0e] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.eph0': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2025-12-09T11:04:07Z,direct_url=<?>,disk_format='qcow2',id=9d62c0b6-ea01-495f-87e9-b5532d7a4e36,min_disk=0,min_ram=0,name='fvt_testing_image',owner='736bbfddbeea47e3ac9d863ba120b8f2',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2025-12-09T11:04:12Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'encrypted': False, 'encryption_options': None, 'encryption_format': None, 'disk_bus': 'virtio', 'boot_index': 0, 'encryption_secret_uuid': None, 'device_name': '/dev/vda', 'device_type': 'disk', 'guest_format': None, 'size': 0, 'image_id': '9d62c0b6-ea01-495f-87e9-b5532d7a4e36'}], 'ephemerals': [{'encrypted': False, 'encryption_options': None, 'encryption_format': None, 'encryption_secret_uuid': None, 'disk_bus': 'virtio', 'device_name': '/dev/vdb', 'device_type': 'disk', 'guest_format': None, 'size': 1}], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549#033[00m
Dec  9 11:04:24 compute-0 nova_compute[189493]: 2025-12-09 11:04:24.149 189497 WARNING nova.virt.libvirt.driver [None req-d7fb0ed0-a348-4719-927d-c7570da72916 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  9 11:04:24 compute-0 nova_compute[189493]: 2025-12-09 11:04:24.161 189497 DEBUG nova.virt.libvirt.host [None req-d7fb0ed0-a348-4719-927d-c7570da72916 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653#033[00m
Dec  9 11:04:24 compute-0 nova_compute[189493]: 2025-12-09 11:04:24.162 189497 DEBUG nova.virt.libvirt.host [None req-d7fb0ed0-a348-4719-927d-c7570da72916 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663#033[00m
Dec  9 11:04:24 compute-0 nova_compute[189493]: 2025-12-09 11:04:24.171 189497 DEBUG nova.virt.libvirt.host [None req-d7fb0ed0-a348-4719-927d-c7570da72916 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672#033[00m
Dec  9 11:04:24 compute-0 nova_compute[189493]: 2025-12-09 11:04:24.172 189497 DEBUG nova.virt.libvirt.host [None req-d7fb0ed0-a348-4719-927d-c7570da72916 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679#033[00m
Dec  9 11:04:24 compute-0 nova_compute[189493]: 2025-12-09 11:04:24.173 189497 DEBUG nova.virt.libvirt.driver [None req-d7fb0ed0-a348-4719-927d-c7570da72916 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396#033[00m
Dec  9 11:04:24 compute-0 nova_compute[189493]: 2025-12-09 11:04:24.173 189497 DEBUG nova.virt.hardware [None req-d7fb0ed0-a348-4719-927d-c7570da72916 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Getting desirable topologies for flavor Flavor(created_at=2025-12-09T11:04:15Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=1,extra_specs={},flavorid='922e7637-0894-48fe-9b2a-1166c1701507',id=2,is_public=True,memory_mb=512,name='fvt_testing_flavor',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2025-12-09T11:04:07Z,direct_url=<?>,disk_format='qcow2',id=9d62c0b6-ea01-495f-87e9-b5532d7a4e36,min_disk=0,min_ram=0,name='fvt_testing_image',owner='736bbfddbeea47e3ac9d863ba120b8f2',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2025-12-09T11:04:12Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563#033[00m
Dec  9 11:04:24 compute-0 nova_compute[189493]: 2025-12-09 11:04:24.174 189497 DEBUG nova.virt.hardware [None req-d7fb0ed0-a348-4719-927d-c7570da72916 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348#033[00m
Dec  9 11:04:24 compute-0 nova_compute[189493]: 2025-12-09 11:04:24.175 189497 DEBUG nova.virt.hardware [None req-d7fb0ed0-a348-4719-927d-c7570da72916 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352#033[00m
Dec  9 11:04:24 compute-0 nova_compute[189493]: 2025-12-09 11:04:24.175 189497 DEBUG nova.virt.hardware [None req-d7fb0ed0-a348-4719-927d-c7570da72916 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388#033[00m
Dec  9 11:04:24 compute-0 nova_compute[189493]: 2025-12-09 11:04:24.175 189497 DEBUG nova.virt.hardware [None req-d7fb0ed0-a348-4719-927d-c7570da72916 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392#033[00m
Dec  9 11:04:24 compute-0 nova_compute[189493]: 2025-12-09 11:04:24.176 189497 DEBUG nova.virt.hardware [None req-d7fb0ed0-a348-4719-927d-c7570da72916 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430#033[00m
Dec  9 11:04:24 compute-0 nova_compute[189493]: 2025-12-09 11:04:24.176 189497 DEBUG nova.virt.hardware [None req-d7fb0ed0-a348-4719-927d-c7570da72916 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569#033[00m
Dec  9 11:04:24 compute-0 nova_compute[189493]: 2025-12-09 11:04:24.177 189497 DEBUG nova.virt.hardware [None req-d7fb0ed0-a348-4719-927d-c7570da72916 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471#033[00m
Dec  9 11:04:24 compute-0 nova_compute[189493]: 2025-12-09 11:04:24.177 189497 DEBUG nova.virt.hardware [None req-d7fb0ed0-a348-4719-927d-c7570da72916 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501#033[00m
Dec  9 11:04:24 compute-0 nova_compute[189493]: 2025-12-09 11:04:24.177 189497 DEBUG nova.virt.hardware [None req-d7fb0ed0-a348-4719-927d-c7570da72916 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575#033[00m
Dec  9 11:04:24 compute-0 nova_compute[189493]: 2025-12-09 11:04:24.178 189497 DEBUG nova.virt.hardware [None req-d7fb0ed0-a348-4719-927d-c7570da72916 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577#033[00m
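The topology lines above show nova enumerating every (sockets, cores, threads) factorisation of the vCPU count within the 65536 limits and then sorting by preference; with vcpus=1 the only candidate is 1:1:1. A toy re-derivation of the search (not nova's actual code, which also honours NUMA constraints and preference ordering):

    def possible_topologies(vcpus, max_sockets=65536, max_cores=65536,
                            max_threads=65536):
        # Every factorisation s * c * t == vcpus within the given limits.
        return [
            (s, c, t)
            for s in range(1, min(vcpus, max_sockets) + 1)
            for c in range(1, min(vcpus, max_cores) + 1)
            for t in range(1, min(vcpus, max_threads) + 1)
            if s * c * t == vcpus
        ]

    print(possible_topologies(1))  # [(1, 1, 1)], as logged above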
Dec  9 11:04:24 compute-0 nova_compute[189493]: 2025-12-09 11:04:24.185 189497 DEBUG nova.objects.instance [None req-d7fb0ed0-a348-4719-927d-c7570da72916 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lazy-loading 'pci_devices' on Instance uuid 44ac2ce0-9161-4b3c-baf9-be45585c5f0e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  9 11:04:24 compute-0 nova_compute[189493]: 2025-12-09 11:04:24.210 189497 DEBUG nova.virt.libvirt.driver [None req-d7fb0ed0-a348-4719-927d-c7570da72916 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 44ac2ce0-9161-4b3c-baf9-be45585c5f0e] End _get_guest_xml xml=<domain type="kvm">
Dec  9 11:04:24 compute-0 nova_compute[189493]:  <uuid>44ac2ce0-9161-4b3c-baf9-be45585c5f0e</uuid>
Dec  9 11:04:24 compute-0 nova_compute[189493]:  <name>instance-00000005</name>
Dec  9 11:04:24 compute-0 nova_compute[189493]:  <memory>524288</memory>
Dec  9 11:04:24 compute-0 nova_compute[189493]:  <vcpu>1</vcpu>
Dec  9 11:04:24 compute-0 nova_compute[189493]:  <metadata>
Dec  9 11:04:24 compute-0 nova_compute[189493]:    <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Dec  9 11:04:24 compute-0 nova_compute[189493]:      <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Dec  9 11:04:24 compute-0 nova_compute[189493]:      <nova:name>fvt_testing_server</nova:name>
Dec  9 11:04:24 compute-0 nova_compute[189493]:      <nova:creationTime>2025-12-09 11:04:24</nova:creationTime>
Dec  9 11:04:24 compute-0 nova_compute[189493]:      <nova:flavor name="fvt_testing_flavor">
Dec  9 11:04:24 compute-0 nova_compute[189493]:        <nova:memory>512</nova:memory>
Dec  9 11:04:24 compute-0 nova_compute[189493]:        <nova:disk>1</nova:disk>
Dec  9 11:04:24 compute-0 nova_compute[189493]:        <nova:swap>0</nova:swap>
Dec  9 11:04:24 compute-0 nova_compute[189493]:        <nova:ephemeral>1</nova:ephemeral>
Dec  9 11:04:24 compute-0 nova_compute[189493]:        <nova:vcpus>1</nova:vcpus>
Dec  9 11:04:24 compute-0 nova_compute[189493]:      </nova:flavor>
Dec  9 11:04:24 compute-0 nova_compute[189493]:      <nova:owner>
Dec  9 11:04:24 compute-0 nova_compute[189493]:        <nova:user uuid="e6d3a937c2a74eb0816d9f63820935e0">admin</nova:user>
Dec  9 11:04:24 compute-0 nova_compute[189493]:        <nova:project uuid="736bbfddbeea47e3ac9d863ba120b8f2">admin</nova:project>
Dec  9 11:04:24 compute-0 nova_compute[189493]:      </nova:owner>
Dec  9 11:04:24 compute-0 nova_compute[189493]:      <nova:root type="image" uuid="9d62c0b6-ea01-495f-87e9-b5532d7a4e36"/>
Dec  9 11:04:24 compute-0 nova_compute[189493]:      <nova:ports/>
Dec  9 11:04:24 compute-0 nova_compute[189493]:    </nova:instance>
Dec  9 11:04:24 compute-0 nova_compute[189493]:  </metadata>
Dec  9 11:04:24 compute-0 nova_compute[189493]:  <sysinfo type="smbios">
Dec  9 11:04:24 compute-0 nova_compute[189493]:    <system>
Dec  9 11:04:24 compute-0 nova_compute[189493]:      <entry name="manufacturer">RDO</entry>
Dec  9 11:04:24 compute-0 nova_compute[189493]:      <entry name="product">OpenStack Compute</entry>
Dec  9 11:04:24 compute-0 nova_compute[189493]:      <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Dec  9 11:04:24 compute-0 nova_compute[189493]:      <entry name="serial">44ac2ce0-9161-4b3c-baf9-be45585c5f0e</entry>
Dec  9 11:04:24 compute-0 nova_compute[189493]:      <entry name="uuid">44ac2ce0-9161-4b3c-baf9-be45585c5f0e</entry>
Dec  9 11:04:24 compute-0 nova_compute[189493]:      <entry name="family">Virtual Machine</entry>
Dec  9 11:04:24 compute-0 nova_compute[189493]:    </system>
Dec  9 11:04:24 compute-0 nova_compute[189493]:  </sysinfo>
Dec  9 11:04:24 compute-0 nova_compute[189493]:  <os>
Dec  9 11:04:24 compute-0 nova_compute[189493]:    <type arch="x86_64" machine="q35">hvm</type>
Dec  9 11:04:24 compute-0 nova_compute[189493]:    <boot dev="hd"/>
Dec  9 11:04:24 compute-0 nova_compute[189493]:    <smbios mode="sysinfo"/>
Dec  9 11:04:24 compute-0 nova_compute[189493]:  </os>
Dec  9 11:04:24 compute-0 nova_compute[189493]:  <features>
Dec  9 11:04:24 compute-0 nova_compute[189493]:    <acpi/>
Dec  9 11:04:24 compute-0 nova_compute[189493]:    <apic/>
Dec  9 11:04:24 compute-0 nova_compute[189493]:    <vmcoreinfo/>
Dec  9 11:04:24 compute-0 nova_compute[189493]:  </features>
Dec  9 11:04:24 compute-0 nova_compute[189493]:  <clock offset="utc">
Dec  9 11:04:24 compute-0 nova_compute[189493]:    <timer name="pit" tickpolicy="delay"/>
Dec  9 11:04:24 compute-0 nova_compute[189493]:    <timer name="rtc" tickpolicy="catchup"/>
Dec  9 11:04:24 compute-0 nova_compute[189493]:    <timer name="hpet" present="no"/>
Dec  9 11:04:24 compute-0 nova_compute[189493]:  </clock>
Dec  9 11:04:24 compute-0 nova_compute[189493]:  <cpu mode="host-model" match="exact">
Dec  9 11:04:24 compute-0 nova_compute[189493]:    <topology sockets="1" cores="1" threads="1"/>
Dec  9 11:04:24 compute-0 nova_compute[189493]:  </cpu>
Dec  9 11:04:24 compute-0 nova_compute[189493]:  <devices>
Dec  9 11:04:24 compute-0 nova_compute[189493]:    <disk type="file" device="disk">
Dec  9 11:04:24 compute-0 nova_compute[189493]:      <driver name="qemu" type="qcow2" cache="none"/>
Dec  9 11:04:24 compute-0 nova_compute[189493]:      <source file="/var/lib/nova/instances/44ac2ce0-9161-4b3c-baf9-be45585c5f0e/disk"/>
Dec  9 11:04:24 compute-0 nova_compute[189493]:      <target dev="vda" bus="virtio"/>
Dec  9 11:04:24 compute-0 nova_compute[189493]:    </disk>
Dec  9 11:04:24 compute-0 nova_compute[189493]:    <disk type="file" device="disk">
Dec  9 11:04:24 compute-0 nova_compute[189493]:      <driver name="qemu" type="qcow2" cache="none"/>
Dec  9 11:04:24 compute-0 nova_compute[189493]:      <source file="/var/lib/nova/instances/44ac2ce0-9161-4b3c-baf9-be45585c5f0e/disk.eph0"/>
Dec  9 11:04:24 compute-0 nova_compute[189493]:      <target dev="vdb" bus="virtio"/>
Dec  9 11:04:24 compute-0 nova_compute[189493]:    </disk>
Dec  9 11:04:24 compute-0 nova_compute[189493]:    <disk type="file" device="cdrom">
Dec  9 11:04:24 compute-0 nova_compute[189493]:      <driver name="qemu" type="raw" cache="none"/>
Dec  9 11:04:24 compute-0 nova_compute[189493]:      <source file="/var/lib/nova/instances/44ac2ce0-9161-4b3c-baf9-be45585c5f0e/disk.config"/>
Dec  9 11:04:24 compute-0 nova_compute[189493]:      <target dev="sda" bus="sata"/>
Dec  9 11:04:24 compute-0 nova_compute[189493]:    </disk>
Dec  9 11:04:24 compute-0 nova_compute[189493]:    <serial type="pty">
Dec  9 11:04:24 compute-0 nova_compute[189493]:      <log file="/var/lib/nova/instances/44ac2ce0-9161-4b3c-baf9-be45585c5f0e/console.log" append="off"/>
Dec  9 11:04:24 compute-0 nova_compute[189493]:    </serial>
Dec  9 11:04:24 compute-0 nova_compute[189493]:    <graphics type="vnc" autoport="yes" listen="::0"/>
Dec  9 11:04:24 compute-0 nova_compute[189493]:    <video>
Dec  9 11:04:24 compute-0 nova_compute[189493]:      <model type="virtio"/>
Dec  9 11:04:24 compute-0 nova_compute[189493]:    </video>
Dec  9 11:04:24 compute-0 nova_compute[189493]:    <input type="tablet" bus="usb"/>
Dec  9 11:04:24 compute-0 nova_compute[189493]:    <rng model="virtio">
Dec  9 11:04:24 compute-0 nova_compute[189493]:      <backend model="random">/dev/urandom</backend>
Dec  9 11:04:24 compute-0 nova_compute[189493]:    </rng>
Dec  9 11:04:24 compute-0 nova_compute[189493]:    <controller type="pci" model="pcie-root"/>
Dec  9 11:04:24 compute-0 nova_compute[189493]:    <controller type="pci" model="pcie-root-port"/>
Dec  9 11:04:24 compute-0 nova_compute[189493]:    <controller type="pci" model="pcie-root-port"/>
Dec  9 11:04:24 compute-0 nova_compute[189493]:    <controller type="pci" model="pcie-root-port"/>
Dec  9 11:04:24 compute-0 nova_compute[189493]:    <controller type="pci" model="pcie-root-port"/>
Dec  9 11:04:24 compute-0 nova_compute[189493]:    <controller type="pci" model="pcie-root-port"/>
Dec  9 11:04:24 compute-0 nova_compute[189493]:    <controller type="pci" model="pcie-root-port"/>
Dec  9 11:04:24 compute-0 nova_compute[189493]:    <controller type="pci" model="pcie-root-port"/>
Dec  9 11:04:24 compute-0 nova_compute[189493]:    <controller type="pci" model="pcie-root-port"/>
Dec  9 11:04:24 compute-0 nova_compute[189493]:    <controller type="pci" model="pcie-root-port"/>
Dec  9 11:04:24 compute-0 nova_compute[189493]:    <controller type="pci" model="pcie-root-port"/>
Dec  9 11:04:24 compute-0 nova_compute[189493]:    <controller type="pci" model="pcie-root-port"/>
Dec  9 11:04:24 compute-0 nova_compute[189493]:    <controller type="pci" model="pcie-root-port"/>
Dec  9 11:04:24 compute-0 nova_compute[189493]:    <controller type="pci" model="pcie-root-port"/>
Dec  9 11:04:24 compute-0 nova_compute[189493]:    <controller type="pci" model="pcie-root-port"/>
Dec  9 11:04:24 compute-0 nova_compute[189493]:    <controller type="pci" model="pcie-root-port"/>
Dec  9 11:04:24 compute-0 nova_compute[189493]:    <controller type="pci" model="pcie-root-port"/>
Dec  9 11:04:24 compute-0 nova_compute[189493]:    <controller type="pci" model="pcie-root-port"/>
Dec  9 11:04:24 compute-0 nova_compute[189493]:    <controller type="pci" model="pcie-root-port"/>
Dec  9 11:04:24 compute-0 nova_compute[189493]:    <controller type="pci" model="pcie-root-port"/>
Dec  9 11:04:24 compute-0 nova_compute[189493]:    <controller type="pci" model="pcie-root-port"/>
Dec  9 11:04:24 compute-0 nova_compute[189493]:    <controller type="pci" model="pcie-root-port"/>
Dec  9 11:04:24 compute-0 nova_compute[189493]:    <controller type="pci" model="pcie-root-port"/>
Dec  9 11:04:24 compute-0 nova_compute[189493]:    <controller type="pci" model="pcie-root-port"/>
Dec  9 11:04:24 compute-0 nova_compute[189493]:    <controller type="pci" model="pcie-root-port"/>
Dec  9 11:04:24 compute-0 nova_compute[189493]:    <controller type="usb" index="0"/>
Dec  9 11:04:24 compute-0 nova_compute[189493]:    <memballoon model="virtio">
Dec  9 11:04:24 compute-0 nova_compute[189493]:      <stats period="10"/>
Dec  9 11:04:24 compute-0 nova_compute[189493]:    </memballoon>
Dec  9 11:04:24 compute-0 nova_compute[189493]:  </devices>
Dec  9 11:04:24 compute-0 nova_compute[189493]: </domain>
Dec  9 11:04:24 compute-0 nova_compute[189493]: _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555#033[00m
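The domain XML dumped above embeds nova's own metadata under the http://openstack.org/xmlns/libvirt/nova/1.1 namespace, which is how external tools such as the kepler exporter (see its LIBVIRT_METADATA_URI setting later in this log) map guests back to flavors and owners. A sketch of extracting that block, assuming the XML has been saved to a local file (virsh dumpxml instance-00000005 would return the live equivalent):

    import xml.etree.ElementTree as ET

    NOVA_NS = {"nova": "http://openstack.org/xmlns/libvirt/nova/1.1"}

    tree = ET.parse("instance-00000005.xml")  # file name is an assumption
    meta = tree.find("./metadata/nova:instance", NOVA_NS)
    flavor = meta.find("nova:flavor", NOVA_NS)
    print("flavor:", flavor.get("name"),
          "mem:", flavor.findtext("nova:memory", namespaces=NOVA_NS),
          "vcpus:", flavor.findtext("nova:vcpus", namespaces=NOVA_NS))
    print("owner:", meta.findtext("nova:owner/nova:user", namespaces=NOVA_NS))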
Dec  9 11:04:24 compute-0 nova_compute[189493]: 2025-12-09 11:04:24.292 189497 DEBUG nova.virt.libvirt.driver [None req-d7fb0ed0-a348-4719-927d-c7570da72916 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  9 11:04:24 compute-0 nova_compute[189493]: 2025-12-09 11:04:24.294 189497 DEBUG nova.virt.libvirt.driver [None req-d7fb0ed0-a348-4719-927d-c7570da72916 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  9 11:04:24 compute-0 nova_compute[189493]: 2025-12-09 11:04:24.294 189497 DEBUG nova.virt.libvirt.driver [None req-d7fb0ed0-a348-4719-927d-c7570da72916 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116#033[00m
Dec  9 11:04:24 compute-0 nova_compute[189493]: 2025-12-09 11:04:24.296 189497 INFO nova.virt.libvirt.driver [None req-d7fb0ed0-a348-4719-927d-c7570da72916 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 44ac2ce0-9161-4b3c-baf9-be45585c5f0e] Using config drive#033[00m
Dec  9 11:04:24 compute-0 nova_compute[189493]: 2025-12-09 11:04:24.736 189497 INFO nova.virt.libvirt.driver [None req-d7fb0ed0-a348-4719-927d-c7570da72916 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 44ac2ce0-9161-4b3c-baf9-be45585c5f0e] Creating config drive at /var/lib/nova/instances/44ac2ce0-9161-4b3c-baf9-be45585c5f0e/disk.config#033[00m
Dec  9 11:04:24 compute-0 nova_compute[189493]: 2025-12-09 11:04:24.747 189497 DEBUG oslo_concurrency.processutils [None req-d7fb0ed0-a348-4719-927d-c7570da72916 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/44ac2ce0-9161-4b3c-baf9-be45585c5f0e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpf9s0tggx execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  9 11:04:24 compute-0 nova_compute[189493]: 2025-12-09 11:04:24.876 189497 DEBUG oslo_concurrency.processutils [None req-d7fb0ed0-a348-4719-927d-c7570da72916 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/44ac2ce0-9161-4b3c-baf9-be45585c5f0e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpf9s0tggx" returned: 0 in 0.129s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
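The config drive created above is a plain ISO9660 image labelled config-2, built from a temporary staging directory holding the metadata files. A minimal reproduction of the logged mkisofs call (staging directory and output path are placeholders; the flags are copied from the CMD line above):

    import subprocess

    staging_dir = "/tmp/example-configdrive"  # placeholder for nova's tmp dir
    subprocess.run(
        ["/usr/bin/mkisofs", "-o", "disk.config",
         "-ldots", "-allow-lowercase", "-allow-multidot", "-l",
         "-publisher", "OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9",
         "-quiet", "-J", "-r", "-V", "config-2", staging_dir],
        check=True,
    )
    # The guest later mounts this ISO (filesystem label config-2) to read
    # its metadata, instead of reaching a network metadata service.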
Dec  9 11:04:24 compute-0 systemd-machined[155790]: New machine qemu-5-instance-00000005.
Dec  9 11:04:25 compute-0 systemd[1]: Started Virtual Machine qemu-5-instance-00000005.
Dec  9 11:04:25 compute-0 systemd[1]: Starting libvirt proxy daemon...
Dec  9 11:04:25 compute-0 systemd[1]: Started libvirt proxy daemon.
Dec  9 11:04:26 compute-0 nova_compute[189493]: 2025-12-09 11:04:26.148 189497 DEBUG nova.virt.driver [None req-bd919016-4d35-4252-9704-133b2c72d336 - - - - - -] Emitting event <LifecycleEvent: 1765278266.1475823, 44ac2ce0-9161-4b3c-baf9-be45585c5f0e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  9 11:04:26 compute-0 nova_compute[189493]: 2025-12-09 11:04:26.150 189497 INFO nova.compute.manager [None req-bd919016-4d35-4252-9704-133b2c72d336 - - - - - -] [instance: 44ac2ce0-9161-4b3c-baf9-be45585c5f0e] VM Resumed (Lifecycle Event)#033[00m
Dec  9 11:04:26 compute-0 nova_compute[189493]: 2025-12-09 11:04:26.155 189497 DEBUG nova.compute.manager [None req-d7fb0ed0-a348-4719-927d-c7570da72916 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 44ac2ce0-9161-4b3c-baf9-be45585c5f0e] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577#033[00m
Dec  9 11:04:26 compute-0 nova_compute[189493]: 2025-12-09 11:04:26.155 189497 DEBUG nova.virt.libvirt.driver [None req-d7fb0ed0-a348-4719-927d-c7570da72916 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 44ac2ce0-9161-4b3c-baf9-be45585c5f0e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417#033[00m
Dec  9 11:04:26 compute-0 nova_compute[189493]: 2025-12-09 11:04:26.164 189497 INFO nova.virt.libvirt.driver [-] [instance: 44ac2ce0-9161-4b3c-baf9-be45585c5f0e] Instance spawned successfully.#033[00m
Dec  9 11:04:26 compute-0 nova_compute[189493]: 2025-12-09 11:04:26.165 189497 DEBUG nova.virt.libvirt.driver [None req-d7fb0ed0-a348-4719-927d-c7570da72916 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 44ac2ce0-9161-4b3c-baf9-be45585c5f0e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917#033[00m
Dec  9 11:04:26 compute-0 nova_compute[189493]: 2025-12-09 11:04:26.171 189497 DEBUG nova.compute.manager [None req-bd919016-4d35-4252-9704-133b2c72d336 - - - - - -] [instance: 44ac2ce0-9161-4b3c-baf9-be45585c5f0e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  9 11:04:26 compute-0 nova_compute[189493]: 2025-12-09 11:04:26.188 189497 DEBUG nova.compute.manager [None req-bd919016-4d35-4252-9704-133b2c72d336 - - - - - -] [instance: 44ac2ce0-9161-4b3c-baf9-be45585c5f0e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  9 11:04:26 compute-0 nova_compute[189493]: 2025-12-09 11:04:26.198 189497 DEBUG nova.virt.libvirt.driver [None req-d7fb0ed0-a348-4719-927d-c7570da72916 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 44ac2ce0-9161-4b3c-baf9-be45585c5f0e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  9 11:04:26 compute-0 nova_compute[189493]: 2025-12-09 11:04:26.198 189497 DEBUG nova.virt.libvirt.driver [None req-d7fb0ed0-a348-4719-927d-c7570da72916 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 44ac2ce0-9161-4b3c-baf9-be45585c5f0e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  9 11:04:26 compute-0 nova_compute[189493]: 2025-12-09 11:04:26.199 189497 DEBUG nova.virt.libvirt.driver [None req-d7fb0ed0-a348-4719-927d-c7570da72916 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 44ac2ce0-9161-4b3c-baf9-be45585c5f0e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  9 11:04:26 compute-0 nova_compute[189493]: 2025-12-09 11:04:26.200 189497 DEBUG nova.virt.libvirt.driver [None req-d7fb0ed0-a348-4719-927d-c7570da72916 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 44ac2ce0-9161-4b3c-baf9-be45585c5f0e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  9 11:04:26 compute-0 nova_compute[189493]: 2025-12-09 11:04:26.200 189497 DEBUG nova.virt.libvirt.driver [None req-d7fb0ed0-a348-4719-927d-c7570da72916 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 44ac2ce0-9161-4b3c-baf9-be45585c5f0e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  9 11:04:26 compute-0 nova_compute[189493]: 2025-12-09 11:04:26.201 189497 DEBUG nova.virt.libvirt.driver [None req-d7fb0ed0-a348-4719-927d-c7570da72916 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 44ac2ce0-9161-4b3c-baf9-be45585c5f0e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946#033[00m
Dec  9 11:04:26 compute-0 nova_compute[189493]: 2025-12-09 11:04:26.213 189497 INFO nova.compute.manager [None req-bd919016-4d35-4252-9704-133b2c72d336 - - - - - -] [instance: 44ac2ce0-9161-4b3c-baf9-be45585c5f0e] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
Dec  9 11:04:26 compute-0 nova_compute[189493]: 2025-12-09 11:04:26.214 189497 DEBUG nova.virt.driver [None req-bd919016-4d35-4252-9704-133b2c72d336 - - - - - -] Emitting event <LifecycleEvent: 1765278266.1540236, 44ac2ce0-9161-4b3c-baf9-be45585c5f0e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  9 11:04:26 compute-0 nova_compute[189493]: 2025-12-09 11:04:26.215 189497 INFO nova.compute.manager [None req-bd919016-4d35-4252-9704-133b2c72d336 - - - - - -] [instance: 44ac2ce0-9161-4b3c-baf9-be45585c5f0e] VM Started (Lifecycle Event)#033[00m
Dec  9 11:04:26 compute-0 nova_compute[189493]: 2025-12-09 11:04:26.242 189497 DEBUG nova.compute.manager [None req-bd919016-4d35-4252-9704-133b2c72d336 - - - - - -] [instance: 44ac2ce0-9161-4b3c-baf9-be45585c5f0e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  9 11:04:26 compute-0 nova_compute[189493]: 2025-12-09 11:04:26.254 189497 DEBUG nova.compute.manager [None req-bd919016-4d35-4252-9704-133b2c72d336 - - - - - -] [instance: 44ac2ce0-9161-4b3c-baf9-be45585c5f0e] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396#033[00m
Dec  9 11:04:26 compute-0 nova_compute[189493]: 2025-12-09 11:04:26.270 189497 INFO nova.compute.manager [None req-d7fb0ed0-a348-4719-927d-c7570da72916 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 44ac2ce0-9161-4b3c-baf9-be45585c5f0e] Took 4.84 seconds to spawn the instance on the hypervisor.#033[00m
Dec  9 11:04:26 compute-0 nova_compute[189493]: 2025-12-09 11:04:26.271 189497 DEBUG nova.compute.manager [None req-d7fb0ed0-a348-4719-927d-c7570da72916 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 44ac2ce0-9161-4b3c-baf9-be45585c5f0e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  9 11:04:26 compute-0 nova_compute[189493]: 2025-12-09 11:04:26.279 189497 INFO nova.compute.manager [None req-bd919016-4d35-4252-9704-133b2c72d336 - - - - - -] [instance: 44ac2ce0-9161-4b3c-baf9-be45585c5f0e] During sync_power_state the instance has a pending task (spawning). Skip.#033[00m
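The sync messages above compare the integer power state stored in the database with the one reported by the hypervisor; 0 and 1 correspond to NOSTATE and RUNNING, and the sync is skipped while task_state is still spawning. A small decoder for those integers (the mapping mirrors nova.compute.power_state; shown here as a plain dict, not an import of the real module):

    # Integer power states as they appear in the sync log lines above.
    POWER_STATE = {0: "NOSTATE", 1: "RUNNING", 3: "PAUSED",
                   4: "SHUTDOWN", 6: "CRASHED", 7: "SUSPENDED"}

    db_state, vm_state = 0, 1  # values from the log line above
    print(f"DB: {POWER_STATE[db_state]}, hypervisor: {POWER_STATE[vm_state]}")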
Dec  9 11:04:26 compute-0 nova_compute[189493]: 2025-12-09 11:04:26.360 189497 INFO nova.compute.manager [None req-d7fb0ed0-a348-4719-927d-c7570da72916 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 44ac2ce0-9161-4b3c-baf9-be45585c5f0e] Took 5.87 seconds to build instance.#033[00m
Dec  9 11:04:26 compute-0 nova_compute[189493]: 2025-12-09 11:04:26.390 189497 DEBUG oslo_concurrency.lockutils [None req-d7fb0ed0-a348-4719-927d-c7570da72916 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lock "44ac2ce0-9161-4b3c-baf9-be45585c5f0e" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 5.987s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  9 11:04:27 compute-0 nova_compute[189493]: 2025-12-09 11:04:27.548 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:04:28 compute-0 nova_compute[189493]: 2025-12-09 11:04:28.112 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:04:28 compute-0 virtproxyd[246920]: libvirt version: 11.9.0, package: 1.el9 (builder@centos.org, 2025-11-04-09:54:50, )
Dec  9 11:04:28 compute-0 virtproxyd[246920]: hostname: compute-0
Dec  9 11:04:28 compute-0 virtproxyd[246920]: End of file while reading data: Input/output error
Dec  9 11:04:28 compute-0 podman[246948]: 2025-12-09 11:04:28.961554236 +0000 UTC m=+0.096906976 container health_status 8ad198c17f1da12dc50d5e17562d0139fb2a2f84db056ee9551dbf4f34c4cb9d (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release=1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-container, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.openshift.expose-services=, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., build-date=2024-09-18T21:23:30, name=ubi9, io.buildah.version=1.29.0, architecture=x86_64, config_id=edpm, vendor=Red Hat, Inc., io.openshift.tags=base rhel9, vcs-type=git, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, managed_by=edpm_ansible)
Dec  9 11:04:28 compute-0 podman[246949]: 2025-12-09 11:04:28.984994152 +0000 UTC m=+0.119890970 container health_status ceb1c84a2b093143b9383b7e11364d7e851348d724743a0cd9ce4fd0c7070c92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=edpm, org.label-schema.license=GPLv2)
Dec  9 11:04:29 compute-0 podman[203687]: time="2025-12-09T11:04:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  9 11:04:29 compute-0 podman[203687]: @ - - [09/Dec/2025:11:04:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Dec  9 11:04:29 compute-0 podman[203687]: @ - - [09/Dec/2025:11:04:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4805 "" "Go-http-client/1.1"
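The two GET lines above are the podman_exporter polling the libpod REST API over the unix socket configured via CONTAINER_HOST (unix:///run/podman/podman.sock, per the podman_exporter health_status line earlier). A minimal client for the same endpoint, assuming local root access to the socket:

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """HTTPConnection that dials a unix socket instead of TCP."""
        def __init__(self, path):
            super().__init__("localhost")
            self.unix_path = path

        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self.unix_path)

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    containers = json.loads(conn.getresponse().read())
    print(len(containers), "containers")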
Dec  9 11:04:31 compute-0 openstack_network_exporter[205823]: ERROR   11:04:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  9 11:04:31 compute-0 openstack_network_exporter[205823]: ERROR   11:04:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  9 11:04:31 compute-0 openstack_network_exporter[205823]: ERROR   11:04:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  9 11:04:31 compute-0 openstack_network_exporter[205823]: ERROR   11:04:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath

Dec  9 11:04:31 compute-0 openstack_network_exporter[205823]: ERROR   11:04:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  9 11:04:31 compute-0 podman[246986]: 2025-12-09 11:04:31.94128713 +0000 UTC m=+0.088652697 container health_status b432835229990b9e7cd237d75f8273b15e565fca524d4ea9a7c1f1bf3c773614 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_compute, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4)
Dec  9 11:04:31 compute-0 podman[246985]: 2025-12-09 11:04:31.952308374 +0000 UTC m=+0.101821167 container health_status 8f562587c42532f877bd4ac5090cf2d81dd9415b6201e22f74972e6d6b9e9403 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2)
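This health_status event and the similar ones that follow are podman's periodic healthchecks: the 'test' command from the embedded config_data runs inside the container, and health_failing_streak counts consecutive failures. The same state can be read back on demand with podman inspect (container name taken from the event above):

    # Query the health state that the health_status events report.
    import json
    import subprocess

    out = subprocess.run(
        ["podman", "inspect", "--format", "{{json .State.Health}}",
         "ceilometer_agent_compute"],
        check=True, capture_output=True, text=True,
    ).stdout
    health = json.loads(out)
    print(health["Status"], "failing streak:", health["FailingStreak"])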
Dec  9 11:04:32 compute-0 nova_compute[189493]: 2025-12-09 11:04:32.551 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:04:33 compute-0 nova_compute[189493]: 2025-12-09 11:04:33.115 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:04:34 compute-0 nova_compute[189493]: 2025-12-09 11:04:34.842 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  9 11:04:35 compute-0 nova_compute[189493]: 2025-12-09 11:04:35.841 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  9 11:04:36 compute-0 nova_compute[189493]: 2025-12-09 11:04:36.836 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  9 11:04:36 compute-0 nova_compute[189493]: 2025-12-09 11:04:36.841 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  9 11:04:37 compute-0 nova_compute[189493]: 2025-12-09 11:04:37.553 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:04:38 compute-0 nova_compute[189493]: 2025-12-09 11:04:38.117 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:04:38 compute-0 nova_compute[189493]: 2025-12-09 11:04:38.841 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  9 11:04:38 compute-0 podman[247025]: 2025-12-09 11:04:38.974026548 +0000 UTC m=+0.110872269 container health_status 5da5cd4e36e0bba48fb617392bc8983ed1dbced7e4599ef74bb3327a2d50468d (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, maintainer=Red Hat, Inc., managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, architecture=x86_64, distribution-scope=public, container_name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, io.openshift.expose-services=, release=1755695350, version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9)
Dec  9 11:04:40 compute-0 nova_compute[189493]: 2025-12-09 11:04:40.842 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  9 11:04:40 compute-0 nova_compute[189493]: 2025-12-09 11:04:40.842 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  9 11:04:41 compute-0 podman[247045]: 2025-12-09 11:04:41.08462416 +0000 UTC m=+0.217027023 container health_status e0a077177b2f078df1f170a6e5c0e8e08d4365b999ec0c487047ed6ab628f3d6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller)
Dec  9 11:04:41 compute-0 nova_compute[189493]: 2025-12-09 11:04:41.831 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Acquiring lock "refresh_cache-7b43ca09-ed65-4465-9fcc-95caa6dc9a88" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  9 11:04:41 compute-0 nova_compute[189493]: 2025-12-09 11:04:41.831 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Acquired lock "refresh_cache-7b43ca09-ed65-4465-9fcc-95caa6dc9a88" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  9 11:04:41 compute-0 nova_compute[189493]: 2025-12-09 11:04:41.832 189497 DEBUG nova.network.neutron [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] [instance: 7b43ca09-ed65-4465-9fcc-95caa6dc9a88] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  9 11:04:42 compute-0 nova_compute[189493]: 2025-12-09 11:04:42.555 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:04:42 compute-0 podman[247070]: 2025-12-09 11:04:42.578004629 +0000 UTC m=+0.115126983 container health_status d3a438131bb4ae6fd62d2e1493edbbbd51d1b8d6cbe1e9243f414a3aa421452b (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
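The node_exporter command line above trims the systemd collector with --collector.systemd.unit-include, a regex whitelist; node_exporter compiles such patterns effectively anchored, so a full match is required. Checking which unit names the pattern from this log would keep, against a few assumed sample units:

    # Replaying the unit-include filter from the command line above.
    import re

    pattern = re.compile(r"(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service")
    for unit in ("ovsdb-server.service", "virtqemud.service", "sshd.service"):
        print(unit, bool(pattern.fullmatch(unit)))  # True, True, False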
Dec  9 11:04:43 compute-0 nova_compute[189493]: 2025-12-09 11:04:43.119 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:04:44 compute-0 nova_compute[189493]: 2025-12-09 11:04:44.037 189497 DEBUG oslo_concurrency.lockutils [None req-1b51cb23-5a81-4a7a-a880-133562411704 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Acquiring lock "44ac2ce0-9161-4b3c-baf9-be45585c5f0e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  9 11:04:44 compute-0 nova_compute[189493]: 2025-12-09 11:04:44.039 189497 DEBUG oslo_concurrency.lockutils [None req-1b51cb23-5a81-4a7a-a880-133562411704 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lock "44ac2ce0-9161-4b3c-baf9-be45585c5f0e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  9 11:04:44 compute-0 nova_compute[189493]: 2025-12-09 11:04:44.040 189497 DEBUG oslo_concurrency.lockutils [None req-1b51cb23-5a81-4a7a-a880-133562411704 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Acquiring lock "44ac2ce0-9161-4b3c-baf9-be45585c5f0e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  9 11:04:44 compute-0 nova_compute[189493]: 2025-12-09 11:04:44.040 189497 DEBUG oslo_concurrency.lockutils [None req-1b51cb23-5a81-4a7a-a880-133562411704 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lock "44ac2ce0-9161-4b3c-baf9-be45585c5f0e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  9 11:04:44 compute-0 nova_compute[189493]: 2025-12-09 11:04:44.041 189497 DEBUG oslo_concurrency.lockutils [None req-1b51cb23-5a81-4a7a-a880-133562411704 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lock "44ac2ce0-9161-4b3c-baf9-be45585c5f0e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
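The Acquiring/acquired/released triples above come from oslo.concurrency's lockutils: nova serializes work on one instance under a per-UUID lock name, so a second request for the same instance would block at "Acquiring" instead of racing the termination. A minimal sketch of that pattern with the real decorator (the function body is a placeholder):

    # Process-local named lock, as used around do_terminate_instance above;
    # lockutils emits the Acquiring/acquired/released lines at DEBUG.
    from oslo_concurrency import lockutils

    @lockutils.synchronized("44ac2ce0-9161-4b3c-baf9-be45585c5f0e")
    def do_terminate_instance():
        pass  # only one thread of this process runs here per lock name

    do_terminate_instance()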
Dec  9 11:04:44 compute-0 nova_compute[189493]: 2025-12-09 11:04:44.043 189497 INFO nova.compute.manager [None req-1b51cb23-5a81-4a7a-a880-133562411704 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 44ac2ce0-9161-4b3c-baf9-be45585c5f0e] Terminating instance#033[00m
Dec  9 11:04:44 compute-0 nova_compute[189493]: 2025-12-09 11:04:44.046 189497 DEBUG oslo_concurrency.lockutils [None req-1b51cb23-5a81-4a7a-a880-133562411704 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Acquiring lock "refresh_cache-44ac2ce0-9161-4b3c-baf9-be45585c5f0e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  9 11:04:44 compute-0 nova_compute[189493]: 2025-12-09 11:04:44.047 189497 DEBUG oslo_concurrency.lockutils [None req-1b51cb23-5a81-4a7a-a880-133562411704 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Acquired lock "refresh_cache-44ac2ce0-9161-4b3c-baf9-be45585c5f0e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  9 11:04:44 compute-0 nova_compute[189493]: 2025-12-09 11:04:44.047 189497 DEBUG nova.network.neutron [None req-1b51cb23-5a81-4a7a-a880-133562411704 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 44ac2ce0-9161-4b3c-baf9-be45585c5f0e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010#033[00m
Dec  9 11:04:44 compute-0 nova_compute[189493]: 2025-12-09 11:04:44.250 189497 DEBUG nova.network.neutron [None req-1b51cb23-5a81-4a7a-a880-133562411704 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 44ac2ce0-9161-4b3c-baf9-be45585c5f0e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  9 11:04:44 compute-0 nova_compute[189493]: 2025-12-09 11:04:44.389 189497 DEBUG nova.network.neutron [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] [instance: 7b43ca09-ed65-4465-9fcc-95caa6dc9a88] Updating instance_info_cache with network_info: [{"id": "b903bb84-e176-4730-b223-613a9b01712b", "address": "fa:16:3e:91:d3:f4", "network": {"id": "c5af7354-5afe-400a-9e13-5500648117d8", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.92", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.176", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "736bbfddbeea47e3ac9d863ba120b8f2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb903bb84-e1", "ovs_interfaceid": "b903bb84-e176-4730-b223-613a9b01712b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
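The instance_info_cache payload above is plain JSON. Walking it with the data cut down to the keys of interest (values copied from the log line) recovers the MAC, fixed IP and floating IP that the heal task just refreshed:

    import json

    network_info = json.loads("""
    [{"address": "fa:16:3e:91:d3:f4",
      "network": {"subnets": [{"ips": [
          {"address": "192.168.0.92",
           "floating_ips": [{"address": "192.168.122.176"}]}]}]}}]
    """)
    for vif in network_info:
        for subnet in vif["network"]["subnets"]:
            for ip in subnet["ips"]:
                floats = [f["address"] for f in ip.get("floating_ips", [])]
                print(vif["address"], ip["address"], "->", floats)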
Dec  9 11:04:44 compute-0 nova_compute[189493]: 2025-12-09 11:04:44.403 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Releasing lock "refresh_cache-7b43ca09-ed65-4465-9fcc-95caa6dc9a88" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  9 11:04:44 compute-0 nova_compute[189493]: 2025-12-09 11:04:44.404 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] [instance: 7b43ca09-ed65-4465-9fcc-95caa6dc9a88] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
Dec  9 11:04:44 compute-0 nova_compute[189493]: 2025-12-09 11:04:44.404 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  9 11:04:44 compute-0 nova_compute[189493]: 2025-12-09 11:04:44.405 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  9 11:04:44 compute-0 nova_compute[189493]: 2025-12-09 11:04:44.430 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  9 11:04:44 compute-0 nova_compute[189493]: 2025-12-09 11:04:44.431 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  9 11:04:44 compute-0 nova_compute[189493]: 2025-12-09 11:04:44.431 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  9 11:04:44 compute-0 nova_compute[189493]: 2025-12-09 11:04:44.431 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  9 11:04:44 compute-0 nova_compute[189493]: 2025-12-09 11:04:44.543 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/44ac2ce0-9161-4b3c-baf9-be45585c5f0e/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  9 11:04:44 compute-0 nova_compute[189493]: 2025-12-09 11:04:44.652 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/44ac2ce0-9161-4b3c-baf9-be45585c5f0e/disk --force-share --output=json" returned: 0 in 0.110s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  9 11:04:44 compute-0 nova_compute[189493]: 2025-12-09 11:04:44.654 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/44ac2ce0-9161-4b3c-baf9-be45585c5f0e/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  9 11:04:44 compute-0 nova_compute[189493]: 2025-12-09 11:04:44.736 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/44ac2ce0-9161-4b3c-baf9-be45585c5f0e/disk --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  9 11:04:44 compute-0 nova_compute[189493]: 2025-12-09 11:04:44.737 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/44ac2ce0-9161-4b3c-baf9-be45585c5f0e/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  9 11:04:44 compute-0 nova_compute[189493]: 2025-12-09 11:04:44.800 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/44ac2ce0-9161-4b3c-baf9-be45585c5f0e/disk.eph0 --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  9 11:04:44 compute-0 nova_compute[189493]: 2025-12-09 11:04:44.802 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/44ac2ce0-9161-4b3c-baf9-be45585c5f0e/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  9 11:04:44 compute-0 nova_compute[189493]: 2025-12-09 11:04:44.905 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/44ac2ce0-9161-4b3c-baf9-be45585c5f0e/disk.eph0 --force-share --output=json" returned: 0 in 0.103s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
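Each qemu-img info call above is wrapped in "python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30", so the probe runs with RLIMIT_AS capped at 1 GiB and RLIMIT_CPU at 30 s; a pathological image can stall or balloon qemu-img without taking the compute host with it. Roughly how such a call is built with oslo.concurrency (the instance path is a placeholder):

    # Build the same guarded command; execute() inserts the prlimit wrapper.
    from oslo_concurrency import processutils

    limits = processutils.ProcessLimits(address_space=1 * 1024**3, cpu_time=30)
    out, _err = processutils.execute(
        "env", "LC_ALL=C", "LANG=C",
        "qemu-img", "info", "/var/lib/nova/instances/<uuid>/disk",
        "--force-share", "--output=json",
        prlimit=limits,
    )
    print(out)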
Dec  9 11:04:44 compute-0 nova_compute[189493]: 2025-12-09 11:04:44.914 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  9 11:04:44 compute-0 nova_compute[189493]: 2025-12-09 11:04:44.996 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  9 11:04:44 compute-0 nova_compute[189493]: 2025-12-09 11:04:44.997 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  9 11:04:45 compute-0 nova_compute[189493]: 2025-12-09 11:04:45.087 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk --force-share --output=json" returned: 0 in 0.090s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  9 11:04:45 compute-0 nova_compute[189493]: 2025-12-09 11:04:45.089 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  9 11:04:45 compute-0 nova_compute[189493]: 2025-12-09 11:04:45.160 189497 DEBUG nova.network.neutron [None req-1b51cb23-5a81-4a7a-a880-133562411704 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 44ac2ce0-9161-4b3c-baf9-be45585c5f0e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  9 11:04:45 compute-0 nova_compute[189493]: 2025-12-09 11:04:45.175 189497 DEBUG oslo_concurrency.lockutils [None req-1b51cb23-5a81-4a7a-a880-133562411704 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Releasing lock "refresh_cache-44ac2ce0-9161-4b3c-baf9-be45585c5f0e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  9 11:04:45 compute-0 nova_compute[189493]: 2025-12-09 11:04:45.176 189497 DEBUG nova.compute.manager [None req-1b51cb23-5a81-4a7a-a880-133562411704 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 44ac2ce0-9161-4b3c-baf9-be45585c5f0e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
Dec  9 11:04:45 compute-0 nova_compute[189493]: 2025-12-09 11:04:45.188 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.eph0 --force-share --output=json" returned: 0 in 0.099s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  9 11:04:45 compute-0 nova_compute[189493]: 2025-12-09 11:04:45.189 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  9 11:04:45 compute-0 systemd[1]: machine-qemu\x2d5\x2dinstance\x2d00000005.scope: Deactivated successfully.
Dec  9 11:04:45 compute-0 systemd[1]: machine-qemu\x2d5\x2dinstance\x2d00000005.scope: Consumed 20.624s CPU time.
Dec  9 11:04:45 compute-0 systemd-machined[155790]: Machine qemu-5-instance-00000005 terminated.
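The scope name two lines up is systemd escaping at work: libvirt registers each qemu guest with systemd-machined, and "-" inside the machine name is escaped as \x2d in the derived unit name. Undoing that escape recovers the machine name systemd-machined logs above:

    # Minimal unescape for the one sequence present here (a full systemd
    # unescape handles more escape forms than \x2d).
    scope = r"machine-qemu\x2d5\x2dinstance\x2d00000005.scope"
    name = scope[len("machine-"):-len(".scope")].replace(r"\x2d", "-")
    print(name)  # qemu-5-instance-00000005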
Dec  9 11:04:45 compute-0 nova_compute[189493]: 2025-12-09 11:04:45.273 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.eph0 --force-share --output=json" returned: 0 in 0.084s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  9 11:04:45 compute-0 nova_compute[189493]: 2025-12-09 11:04:45.282 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  9 11:04:45 compute-0 nova_compute[189493]: 2025-12-09 11:04:45.368 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk --force-share --output=json" returned: 0 in 0.085s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  9 11:04:45 compute-0 nova_compute[189493]: 2025-12-09 11:04:45.369 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  9 11:04:45 compute-0 nova_compute[189493]: 2025-12-09 11:04:45.440 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  9 11:04:45 compute-0 nova_compute[189493]: 2025-12-09 11:04:45.442 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  9 11:04:45 compute-0 nova_compute[189493]: 2025-12-09 11:04:45.472 189497 INFO nova.virt.libvirt.driver [-] [instance: 44ac2ce0-9161-4b3c-baf9-be45585c5f0e] Instance destroyed successfully.#033[00m
Dec  9 11:04:45 compute-0 nova_compute[189493]: 2025-12-09 11:04:45.474 189497 DEBUG nova.objects.instance [None req-1b51cb23-5a81-4a7a-a880-133562411704 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lazy-loading 'resources' on Instance uuid 44ac2ce0-9161-4b3c-baf9-be45585c5f0e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  9 11:04:45 compute-0 nova_compute[189493]: 2025-12-09 11:04:45.499 189497 INFO nova.virt.libvirt.driver [None req-1b51cb23-5a81-4a7a-a880-133562411704 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 44ac2ce0-9161-4b3c-baf9-be45585c5f0e] Deleting instance files /var/lib/nova/instances/44ac2ce0-9161-4b3c-baf9-be45585c5f0e_del#033[00m
Dec  9 11:04:45 compute-0 nova_compute[189493]: 2025-12-09 11:04:45.500 189497 INFO nova.virt.libvirt.driver [None req-1b51cb23-5a81-4a7a-a880-133562411704 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 44ac2ce0-9161-4b3c-baf9-be45585c5f0e] Deletion of /var/lib/nova/instances/44ac2ce0-9161-4b3c-baf9-be45585c5f0e_del complete#033[00m
Dec  9 11:04:45 compute-0 nova_compute[189493]: 2025-12-09 11:04:45.510 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.eph0 --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  9 11:04:45 compute-0 nova_compute[189493]: 2025-12-09 11:04:45.511 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  9 11:04:45 compute-0 nova_compute[189493]: 2025-12-09 11:04:45.577 189497 INFO nova.compute.manager [None req-1b51cb23-5a81-4a7a-a880-133562411704 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 44ac2ce0-9161-4b3c-baf9-be45585c5f0e] Took 0.40 seconds to destroy the instance on the hypervisor.#033[00m
Dec  9 11:04:45 compute-0 nova_compute[189493]: 2025-12-09 11:04:45.579 189497 DEBUG oslo.service.loopingcall [None req-1b51cb23-5a81-4a7a-a880-133562411704 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec  9 11:04:45 compute-0 nova_compute[189493]: 2025-12-09 11:04:45.580 189497 DEBUG nova.compute.manager [-] [instance: 44ac2ce0-9161-4b3c-baf9-be45585c5f0e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec  9 11:04:45 compute-0 nova_compute[189493]: 2025-12-09 11:04:45.580 189497 DEBUG nova.network.neutron [-] [instance: 44ac2ce0-9161-4b3c-baf9-be45585c5f0e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
Dec  9 11:04:45 compute-0 nova_compute[189493]: 2025-12-09 11:04:45.613 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.eph0 --force-share --output=json" returned: 0 in 0.102s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  9 11:04:45 compute-0 nova_compute[189493]: 2025-12-09 11:04:45.813 189497 DEBUG nova.network.neutron [-] [instance: 44ac2ce0-9161-4b3c-baf9-be45585c5f0e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323#033[00m
Dec  9 11:04:45 compute-0 nova_compute[189493]: 2025-12-09 11:04:45.835 189497 DEBUG nova.network.neutron [-] [instance: 44ac2ce0-9161-4b3c-baf9-be45585c5f0e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  9 11:04:45 compute-0 nova_compute[189493]: 2025-12-09 11:04:45.860 189497 INFO nova.compute.manager [-] [instance: 44ac2ce0-9161-4b3c-baf9-be45585c5f0e] Took 0.28 seconds to deallocate network for instance.#033[00m
Dec  9 11:04:45 compute-0 nova_compute[189493]: 2025-12-09 11:04:45.920 189497 DEBUG oslo_concurrency.lockutils [None req-1b51cb23-5a81-4a7a-a880-133562411704 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  9 11:04:45 compute-0 nova_compute[189493]: 2025-12-09 11:04:45.922 189497 DEBUG oslo_concurrency.lockutils [None req-1b51cb23-5a81-4a7a-a880-133562411704 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  9 11:04:46 compute-0 nova_compute[189493]: 2025-12-09 11:04:46.053 189497 DEBUG nova.scheduler.client.report [None req-1b51cb23-5a81-4a7a-a880-133562411704 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Refreshing inventories for resource provider cdc1168d-33c9-4d2c-8f23-1b695a68afd0 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Dec  9 11:04:46 compute-0 nova_compute[189493]: 2025-12-09 11:04:46.121 189497 DEBUG nova.scheduler.client.report [None req-1b51cb23-5a81-4a7a-a880-133562411704 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Updating ProviderTree inventory for provider cdc1168d-33c9-4d2c-8f23-1b695a68afd0 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Dec  9 11:04:46 compute-0 nova_compute[189493]: 2025-12-09 11:04:46.122 189497 DEBUG nova.compute.provider_tree [None req-1b51cb23-5a81-4a7a-a880-133562411704 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Updating inventory in ProviderTree for provider cdc1168d-33c9-4d2c-8f23-1b695a68afd0 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Dec  9 11:04:46 compute-0 nova_compute[189493]: 2025-12-09 11:04:46.210 189497 DEBUG nova.scheduler.client.report [None req-1b51cb23-5a81-4a7a-a880-133562411704 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Refreshing aggregate associations for resource provider cdc1168d-33c9-4d2c-8f23-1b695a68afd0, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Dec  9 11:04:46 compute-0 nova_compute[189493]: 2025-12-09 11:04:46.242 189497 DEBUG nova.scheduler.client.report [None req-1b51cb23-5a81-4a7a-a880-133562411704 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Refreshing trait associations for resource provider cdc1168d-33c9-4d2c-8f23-1b695a68afd0, traits: COMPUTE_STORAGE_BUS_SATA,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_SSE,HW_CPU_X86_AMD_SVM,HW_CPU_X86_SSE4A,COMPUTE_STORAGE_BUS_FDC,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_SSE42,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_BMI,HW_CPU_X86_BMI2,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SHA,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_AESNI,HW_CPU_X86_CLMUL,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_ABM,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_SSSE3,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_SVM,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_DEVICE_TAGGING,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_F16C,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_AVX2,COMPUTE_NODE,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_SSE2,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_FMA3,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_ACCELERATORS,HW_CPU_X86_MMX,COMPUTE_SECURITY_TPM_2_0,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE41,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_GRAPHICS_MODEL_BOCHS _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
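The inventory dict repeated through the placement exchanges above fixes this node's schedulable capacity: placement treats capacity as (total - reserved) * allocation_ratio per resource class. Worked out for the values shown:

    inventory = {
        "VCPU": {"total": 8, "reserved": 0, "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB": {"total": 79, "reserved": 1, "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        cap = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, cap)  # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 70.2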
Dec  9 11:04:46 compute-0 nova_compute[189493]: 2025-12-09 11:04:46.299 189497 WARNING nova.virt.libvirt.driver [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  9 11:04:46 compute-0 nova_compute[189493]: 2025-12-09 11:04:46.301 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4771MB free_disk=72.13198852539062GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  9 11:04:46 compute-0 nova_compute[189493]: 2025-12-09 11:04:46.301 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  9 11:04:46 compute-0 nova_compute[189493]: 2025-12-09 11:04:46.320 189497 DEBUG nova.compute.provider_tree [None req-1b51cb23-5a81-4a7a-a880-133562411704 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Inventory has not changed in ProviderTree for provider: cdc1168d-33c9-4d2c-8f23-1b695a68afd0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  9 11:04:46 compute-0 nova_compute[189493]: 2025-12-09 11:04:46.383 189497 DEBUG nova.scheduler.client.report [None req-1b51cb23-5a81-4a7a-a880-133562411704 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Inventory has not changed for provider cdc1168d-33c9-4d2c-8f23-1b695a68afd0 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  9 11:04:46 compute-0 nova_compute[189493]: 2025-12-09 11:04:46.452 189497 DEBUG oslo_concurrency.lockutils [None req-1b51cb23-5a81-4a7a-a880-133562411704 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.530s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  9 11:04:46 compute-0 nova_compute[189493]: 2025-12-09 11:04:46.456 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.154s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  9 11:04:46 compute-0 nova_compute[189493]: 2025-12-09 11:04:46.504 189497 INFO nova.scheduler.client.report [None req-1b51cb23-5a81-4a7a-a880-133562411704 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Deleted allocations for instance 44ac2ce0-9161-4b3c-baf9-be45585c5f0e#033[00m
Dec  9 11:04:46 compute-0 nova_compute[189493]: 2025-12-09 11:04:46.559 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Instance 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  9 11:04:46 compute-0 nova_compute[189493]: 2025-12-09 11:04:46.559 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Instance 7b43ca09-ed65-4465-9fcc-95caa6dc9a88 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  9 11:04:46 compute-0 nova_compute[189493]: 2025-12-09 11:04:46.559 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  9 11:04:46 compute-0 nova_compute[189493]: 2025-12-09 11:04:46.560 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
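Note that free_ram=4771MB in the hypervisor view further up is a host measurement, while used_ram=1536MB here is bookkeeping: 512 MB of reserved host memory plus the 512 MB flavor allocation of each of the two surviving instances (their placement allocations appear above as MEMORY_MB: 512, VCPU: 1, DISK_GB: 2 each). The same arithmetic:

    reserved_mb = 512  # reserved host memory from the inventory above
    allocations = [{"MEMORY_MB": 512, "VCPU": 1, "DISK_GB": 2}] * 2
    used_ram = reserved_mb + sum(a["MEMORY_MB"] for a in allocations)
    used_vcpus = sum(a["VCPU"] for a in allocations)
    used_disk = sum(a["DISK_GB"] for a in allocations)
    print(used_ram, used_vcpus, used_disk)  # 1536 2 4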
Dec  9 11:04:46 compute-0 nova_compute[189493]: 2025-12-09 11:04:46.579 189497 DEBUG oslo_concurrency.lockutils [None req-1b51cb23-5a81-4a7a-a880-133562411704 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lock "44ac2ce0-9161-4b3c-baf9-be45585c5f0e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.541s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  9 11:04:46 compute-0 nova_compute[189493]: 2025-12-09 11:04:46.641 189497 DEBUG nova.compute.provider_tree [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Inventory has not changed in ProviderTree for provider: cdc1168d-33c9-4d2c-8f23-1b695a68afd0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  9 11:04:46 compute-0 nova_compute[189493]: 2025-12-09 11:04:46.655 189497 DEBUG nova.scheduler.client.report [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Inventory has not changed for provider cdc1168d-33c9-4d2c-8f23-1b695a68afd0 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  9 11:04:46 compute-0 nova_compute[189493]: 2025-12-09 11:04:46.681 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  9 11:04:46 compute-0 nova_compute[189493]: 2025-12-09 11:04:46.682 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.226s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  9 11:04:46 compute-0 nova_compute[189493]: 2025-12-09 11:04:46.682 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  9 11:04:46 compute-0 nova_compute[189493]: 2025-12-09 11:04:46.682 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Dec  9 11:04:46 compute-0 nova_compute[189493]: 2025-12-09 11:04:46.841 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  9 11:04:47 compute-0 nova_compute[189493]: 2025-12-09 11:04:47.559 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:04:48 compute-0 nova_compute[189493]: 2025-12-09 11:04:48.121 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:04:49 compute-0 nova_compute[189493]: 2025-12-09 11:04:49.856 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  9 11:04:49 compute-0 nova_compute[189493]: 2025-12-09 11:04:49.857 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
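The skip above is driven by nova's reclaim_instance_interval option, the soft-delete grace period in seconds: with the default of 0 (or any non-positive value) soft delete is disabled, which is why the delete of 44ac2ce0 earlier in this log tore the guest down immediately instead of queueing it for reclaim. The corresponding setting, for reference:

    [DEFAULT]
    # grace period before a soft-deleted instance is reclaimed;
    # <= 0 disables soft delete (the default)
    reclaim_instance_interval = 0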
Dec  9 11:04:50 compute-0 podman[247147]: 2025-12-09 11:04:50.996702686 +0000 UTC m=+0.134950742 container health_status 0391d8911d61abd7376f1f93f329cadfe8d3add845c9e6f46fc2c3dfbcc4f02a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251202)
Dec  9 11:04:52 compute-0 nova_compute[189493]: 2025-12-09 11:04:52.564 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 11:04:53 compute-0 nova_compute[189493]: 2025-12-09 11:04:53.125 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 11:04:54 compute-0 podman[247168]: 2025-12-09 11:04:54.980242804 +0000 UTC m=+0.121612115 container health_status 8508a94dacd5acdb5dbf860f4282331529be5c86ebd3e90b10e1dde8bc5013e9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  9 11:04:57 compute-0 nova_compute[189493]: 2025-12-09 11:04:57.569 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 11:04:58 compute-0 nova_compute[189493]: 2025-12-09 11:04:58.128 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 11:04:59 compute-0 podman[203687]: time="2025-12-09T11:04:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  9 11:04:59 compute-0 podman[203687]: @ - - [09/Dec/2025:11:04:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Dec  9 11:04:59 compute-0 podman[203687]: @ - - [09/Dec/2025:11:04:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4808 "" "Go-http-client/1.1"
Dec  9 11:04:59 compute-0 nova_compute[189493]: 2025-12-09 11:04:59.841 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  9 11:04:59 compute-0 nova_compute[189493]: 2025-12-09 11:04:59.842 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Dec  9 11:04:59 compute-0 nova_compute[189493]: 2025-12-09 11:04:59.867 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Dec  9 11:04:59 compute-0 podman[247189]: 2025-12-09 11:04:59.98941059 +0000 UTC m=+0.122585812 container health_status 8ad198c17f1da12dc50d5e17562d0139fb2a2f84db056ee9551dbf4f34c4cb9d (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.display-name=Red Hat Universal Base Image 9, config_id=edpm, io.buildah.version=1.29.0, container_name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., com.redhat.component=ubi9-container, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.openshift.expose-services=, maintainer=Red Hat, Inc., managed_by=edpm_ansible, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2024-09-18T21:23:30, architecture=x86_64, name=ubi9, io.openshift.tags=base rhel9, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., release-0.7.12=, version=9.4)
Dec  9 11:05:00 compute-0 podman[247190]: 2025-12-09 11:05:00.037933716 +0000 UTC m=+0.163009342 container health_status ceb1c84a2b093143b9383b7e11364d7e851348d724743a0cd9ce4fd0c7070c92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_ipmi, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Dec  9 11:05:00 compute-0 nova_compute[189493]: 2025-12-09 11:05:00.465 189497 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765278285.4533355, 44ac2ce0-9161-4b3c-baf9-be45585c5f0e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec  9 11:05:00 compute-0 nova_compute[189493]: 2025-12-09 11:05:00.466 189497 INFO nova.compute.manager [-] [instance: 44ac2ce0-9161-4b3c-baf9-be45585c5f0e] VM Stopped (Lifecycle Event)
Dec  9 11:05:00 compute-0 nova_compute[189493]: 2025-12-09 11:05:00.498 189497 DEBUG nova.compute.manager [None req-c7990f40-4c23-4d8b-8748-81fc154ada98 - - - - - -] [instance: 44ac2ce0-9161-4b3c-baf9-be45585c5f0e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec  9 11:05:01 compute-0 openstack_network_exporter[205823]: ERROR   11:05:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  9 11:05:01 compute-0 openstack_network_exporter[205823]: ERROR   11:05:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  9 11:05:01 compute-0 openstack_network_exporter[205823]: ERROR   11:05:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  9 11:05:01 compute-0 openstack_network_exporter[205823]: ERROR   11:05:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  9 11:05:01 compute-0 openstack_network_exporter[205823]: ERROR   11:05:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  9 11:05:02 compute-0 systemd[1]: session-30.scope: Deactivated successfully.
Dec  9 11:05:02 compute-0 systemd[1]: session-30.scope: Consumed 1.209s CPU time.
Dec  9 11:05:02 compute-0 systemd-logind[806]: Session 30 logged out. Waiting for processes to exit.
Dec  9 11:05:02 compute-0 systemd-logind[806]: Removed session 30.
Dec  9 11:05:02 compute-0 podman[247228]: 2025-12-09 11:05:02.400015287 +0000 UTC m=+0.106928474 container health_status b432835229990b9e7cd237d75f8273b15e565fca524d4ea9a7c1f1bf3c773614 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Dec  9 11:05:02 compute-0 podman[247227]: 2025-12-09 11:05:02.40274626 +0000 UTC m=+0.113802749 container health_status 8f562587c42532f877bd4ac5090cf2d81dd9415b6201e22f74972e6d6b9e9403 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202)
Dec  9 11:05:02 compute-0 nova_compute[189493]: 2025-12-09 11:05:02.571 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 11:05:03 compute-0 nova_compute[189493]: 2025-12-09 11:05:03.131 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 11:05:07 compute-0 nova_compute[189493]: 2025-12-09 11:05:07.575 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 11:05:08 compute-0 nova_compute[189493]: 2025-12-09 11:05:08.135 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 11:05:08 compute-0 rsyslogd[236818]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Dec  9 11:05:09 compute-0 podman[247266]: 2025-12-09 11:05:09.989917203 +0000 UTC m=+0.125683805 container health_status 5da5cd4e36e0bba48fb617392bc8983ed1dbced7e4599ef74bb3327a2d50468d (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, config_id=edpm, distribution-scope=public, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, name=ubi9-minimal, vcs-type=git, io.buildah.version=1.33.7, vendor=Red Hat, Inc., architecture=x86_64, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9)
Dec  9 11:05:12 compute-0 podman[247289]: 2025-12-09 11:05:12.065214411 +0000 UTC m=+0.198257571 container health_status e0a077177b2f078df1f170a6e5c0e8e08d4365b999ec0c487047ed6ab628f3d6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  9 11:05:12 compute-0 nova_compute[189493]: 2025-12-09 11:05:12.577 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 11:05:12 compute-0 podman[247315]: 2025-12-09 11:05:12.960678997 +0000 UTC m=+0.105816875 container health_status d3a438131bb4ae6fd62d2e1493edbbbd51d1b8d6cbe1e9243f414a3aa421452b (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec  9 11:05:13 compute-0 nova_compute[189493]: 2025-12-09 11:05:13.139 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 11:05:17 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:05:16.998 106644 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  9 11:05:17 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:05:17.001 106644 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  9 11:05:17 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:05:17.003 106644 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  9 11:05:17 compute-0 nova_compute[189493]: 2025-12-09 11:05:17.579 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 11:05:18 compute-0 nova_compute[189493]: 2025-12-09 11:05:18.142 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 11:05:21 compute-0 podman[247340]: 2025-12-09 11:05:21.973649168 +0000 UTC m=+0.115721728 container health_status 0391d8911d61abd7376f1f93f329cadfe8d3add845c9e6f46fc2c3dfbcc4f02a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, org.label-schema.build-date=20251202, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible)
Dec  9 11:05:22 compute-0 nova_compute[189493]: 2025-12-09 11:05:22.582 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 11:05:23 compute-0 nova_compute[189493]: 2025-12-09 11:05:23.145 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 11:05:23 compute-0 systemd-logind[806]: New session 31 of user zuul.
Dec  9 11:05:23 compute-0 systemd[1]: Started Session 31 of User zuul.
Dec  9 11:05:24 compute-0 python3[247540]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --format "{{.Names}} {{.Status}}" | grep node_exporter _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  9 11:05:25 compute-0 podman[247578]: 2025-12-09 11:05:25.950969858 +0000 UTC m=+0.093120432 container health_status 8508a94dacd5acdb5dbf860f4282331529be5c86ebd3e90b10e1dde8bc5013e9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  9 11:05:27 compute-0 nova_compute[189493]: 2025-12-09 11:05:27.584 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 11:05:28 compute-0 nova_compute[189493]: 2025-12-09 11:05:28.148 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 11:05:29 compute-0 podman[203687]: time="2025-12-09T11:05:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  9 11:05:29 compute-0 podman[203687]: @ - - [09/Dec/2025:11:05:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Dec  9 11:05:29 compute-0 podman[203687]: @ - - [09/Dec/2025:11:05:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4804 "" "Go-http-client/1.1"
Dec  9 11:05:30 compute-0 podman[247601]: 2025-12-09 11:05:30.943020616 +0000 UTC m=+0.090071962 container health_status 8ad198c17f1da12dc50d5e17562d0139fb2a2f84db056ee9551dbf4f34c4cb9d (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release=1214.1726694543, vendor=Red Hat, Inc., architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, maintainer=Red Hat, Inc., release-0.7.12=, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.openshift.tags=base rhel9, managed_by=edpm_ansible, com.redhat.component=ubi9-container, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.buildah.version=1.29.0, build-date=2024-09-18T21:23:30, distribution-scope=public, name=ubi9, version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git)
Dec  9 11:05:30 compute-0 podman[247602]: 2025-12-09 11:05:30.97270504 +0000 UTC m=+0.117578369 container health_status ceb1c84a2b093143b9383b7e11364d7e851348d724743a0cd9ce4fd0c7070c92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Dec  9 11:05:31 compute-0 openstack_network_exporter[205823]: ERROR   11:05:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  9 11:05:31 compute-0 openstack_network_exporter[205823]: ERROR   11:05:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  9 11:05:31 compute-0 openstack_network_exporter[205823]: ERROR   11:05:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  9 11:05:31 compute-0 openstack_network_exporter[205823]: ERROR   11:05:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  9 11:05:31 compute-0 openstack_network_exporter[205823]: ERROR   11:05:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  9 11:05:32 compute-0 nova_compute[189493]: 2025-12-09 11:05:32.586 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 11:05:32 compute-0 podman[247729]: 2025-12-09 11:05:32.986701999 +0000 UTC m=+0.123130883 container health_status b432835229990b9e7cd237d75f8273b15e565fca524d4ea9a7c1f1bf3c773614 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=edpm, tcib_managed=true, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  9 11:05:32 compute-0 podman[247722]: 2025-12-09 11:05:32.988869636 +0000 UTC m=+0.132421016 container health_status 8f562587c42532f877bd4ac5090cf2d81dd9415b6201e22f74972e6d6b9e9403 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec  9 11:05:33 compute-0 nova_compute[189493]: 2025-12-09 11:05:33.151 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 11:05:33 compute-0 python3[247847]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --format "{{.Names}} {{.Status}}" | grep podman_exporter _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  9 11:05:36 compute-0 nova_compute[189493]: 2025-12-09 11:05:36.863 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  9 11:05:36 compute-0 nova_compute[189493]: 2025-12-09 11:05:36.864 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  9 11:05:36 compute-0 nova_compute[189493]: 2025-12-09 11:05:36.890 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  9 11:05:36 compute-0 nova_compute[189493]: 2025-12-09 11:05:36.892 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  9 11:05:36 compute-0 nova_compute[189493]: 2025-12-09 11:05:36.893 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  9 11:05:37 compute-0 nova_compute[189493]: 2025-12-09 11:05:37.587 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 11:05:38 compute-0 nova_compute[189493]: 2025-12-09 11:05:38.155 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 11:05:39 compute-0 nova_compute[189493]: 2025-12-09 11:05:39.843 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  9 11:05:40 compute-0 podman[247888]: 2025-12-09 11:05:40.962553747 +0000 UTC m=+0.111801588 container health_status 5da5cd4e36e0bba48fb617392bc8983ed1dbced7e4599ef74bb3327a2d50468d (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, distribution-scope=public, version=9.6, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., container_name=openstack_network_exporter, vcs-type=git, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, config_id=edpm, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., name=ubi9-minimal, architecture=x86_64, managed_by=edpm_ansible)
Dec  9 11:05:42 compute-0 nova_compute[189493]: 2025-12-09 11:05:42.590 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 11:05:43 compute-0 nova_compute[189493]: 2025-12-09 11:05:43.186 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 11:05:43 compute-0 nova_compute[189493]: 2025-12-09 11:05:43.188 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  9 11:05:43 compute-0 nova_compute[189493]: 2025-12-09 11:05:43.188 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec  9 11:05:43 compute-0 nova_compute[189493]: 2025-12-09 11:05:43.188 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec  9 11:05:43 compute-0 podman[247961]: 2025-12-09 11:05:43.291128115 +0000 UTC m=+0.074810113 container health_status d3a438131bb4ae6fd62d2e1493edbbbd51d1b8d6cbe1e9243f414a3aa421452b (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec  9 11:05:43 compute-0 podman[247937]: 2025-12-09 11:05:43.356065778 +0000 UTC m=+0.142839247 container health_status e0a077177b2f078df1f170a6e5c0e8e08d4365b999ec0c487047ed6ab628f3d6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Dec  9 11:05:43 compute-0 nova_compute[189493]: 2025-12-09 11:05:43.870 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Acquiring lock "refresh_cache-41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec  9 11:05:43 compute-0 nova_compute[189493]: 2025-12-09 11:05:43.871 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Acquired lock "refresh_cache-41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec  9 11:05:43 compute-0 nova_compute[189493]: 2025-12-09 11:05:43.871 189497 DEBUG nova.network.neutron [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] [instance: 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec  9 11:05:43 compute-0 nova_compute[189493]: 2025-12-09 11:05:43.872 189497 DEBUG nova.objects.instance [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec  9 11:05:43 compute-0 python3[248128]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --format "{{.Names}} {{.Status}}" | grep kepler _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  9 11:05:47 compute-0 nova_compute[189493]: 2025-12-09 11:05:47.176 189497 DEBUG nova.network.neutron [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] [instance: 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f] Updating instance_info_cache with network_info: [{"id": "2c684388-b6d9-4de0-8691-29807fabed2c", "address": "fa:16:3e:c7:65:39", "network": {"id": "c5af7354-5afe-400a-9e13-5500648117d8", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.250", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.226", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "736bbfddbeea47e3ac9d863ba120b8f2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2c684388-b6", "ovs_interfaceid": "2c684388-b6d9-4de0-8691-29807fabed2c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec  9 11:05:47 compute-0 nova_compute[189493]: 2025-12-09 11:05:47.196 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Releasing lock "refresh_cache-41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec  9 11:05:47 compute-0 nova_compute[189493]: 2025-12-09 11:05:47.197 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] [instance: 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Dec  9 11:05:47 compute-0 nova_compute[189493]: 2025-12-09 11:05:47.198 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  9 11:05:47 compute-0 nova_compute[189493]: 2025-12-09 11:05:47.198 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  9 11:05:47 compute-0 nova_compute[189493]: 2025-12-09 11:05:47.227 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  9 11:05:47 compute-0 nova_compute[189493]: 2025-12-09 11:05:47.228 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  9 11:05:47 compute-0 nova_compute[189493]: 2025-12-09 11:05:47.228 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  9 11:05:47 compute-0 nova_compute[189493]: 2025-12-09 11:05:47.229 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec  9 11:05:47 compute-0 nova_compute[189493]: 2025-12-09 11:05:47.344 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  9 11:05:47 compute-0 nova_compute[189493]: 2025-12-09 11:05:47.455 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk --force-share --output=json" returned: 0 in 0.110s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  9 11:05:47 compute-0 nova_compute[189493]: 2025-12-09 11:05:47.457 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  9 11:05:47 compute-0 nova_compute[189493]: 2025-12-09 11:05:47.557 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk --force-share --output=json" returned: 0 in 0.101s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  9 11:05:47 compute-0 nova_compute[189493]: 2025-12-09 11:05:47.559 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  9 11:05:47 compute-0 nova_compute[189493]: 2025-12-09 11:05:47.593 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 11:05:47 compute-0 nova_compute[189493]: 2025-12-09 11:05:47.626 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.eph0 --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  9 11:05:47 compute-0 nova_compute[189493]: 2025-12-09 11:05:47.628 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  9 11:05:47 compute-0 nova_compute[189493]: 2025-12-09 11:05:47.706 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.eph0 --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  9 11:05:47 compute-0 nova_compute[189493]: 2025-12-09 11:05:47.715 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  9 11:05:47 compute-0 nova_compute[189493]: 2025-12-09 11:05:47.795 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  9 11:05:47 compute-0 nova_compute[189493]: 2025-12-09 11:05:47.796 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  9 11:05:47 compute-0 nova_compute[189493]: 2025-12-09 11:05:47.891 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk --force-share --output=json" returned: 0 in 0.095s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  9 11:05:47 compute-0 nova_compute[189493]: 2025-12-09 11:05:47.892 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  9 11:05:47 compute-0 nova_compute[189493]: 2025-12-09 11:05:47.950 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.eph0 --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  9 11:05:47 compute-0 nova_compute[189493]: 2025-12-09 11:05:47.951 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  9 11:05:48 compute-0 nova_compute[189493]: 2025-12-09 11:05:48.036 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.eph0 --force-share --output=json" returned: 0 in 0.085s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  9 11:05:48 compute-0 nova_compute[189493]: 2025-12-09 11:05:48.190 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:05:48 compute-0 nova_compute[189493]: 2025-12-09 11:05:48.454 189497 WARNING nova.virt.libvirt.driver [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  9 11:05:48 compute-0 nova_compute[189493]: 2025-12-09 11:05:48.455 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4838MB free_disk=72.13289260864258GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  9 11:05:48 compute-0 nova_compute[189493]: 2025-12-09 11:05:48.456 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  9 11:05:48 compute-0 nova_compute[189493]: 2025-12-09 11:05:48.456 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  9 11:05:48 compute-0 nova_compute[189493]: 2025-12-09 11:05:48.659 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Instance 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  9 11:05:48 compute-0 nova_compute[189493]: 2025-12-09 11:05:48.660 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Instance 7b43ca09-ed65-4465-9fcc-95caa6dc9a88 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  9 11:05:48 compute-0 nova_compute[189493]: 2025-12-09 11:05:48.660 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  9 11:05:48 compute-0 nova_compute[189493]: 2025-12-09 11:05:48.661 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  9 11:05:48 compute-0 nova_compute[189493]: 2025-12-09 11:05:48.726 189497 DEBUG nova.compute.provider_tree [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Inventory has not changed in ProviderTree for provider: cdc1168d-33c9-4d2c-8f23-1b695a68afd0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  9 11:05:48 compute-0 nova_compute[189493]: 2025-12-09 11:05:48.739 189497 DEBUG nova.scheduler.client.report [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Inventory has not changed for provider cdc1168d-33c9-4d2c-8f23-1b695a68afd0 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  9 11:05:48 compute-0 nova_compute[189493]: 2025-12-09 11:05:48.755 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  9 11:05:48 compute-0 nova_compute[189493]: 2025-12-09 11:05:48.756 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.300s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  9 11:05:52 compute-0 nova_compute[189493]: 2025-12-09 11:05:52.596 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:05:52 compute-0 podman[248194]: 2025-12-09 11:05:52.935643977 +0000 UTC m=+0.090361868 container health_status 0391d8911d61abd7376f1f93f329cadfe8d3add845c9e6f46fc2c3dfbcc4f02a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Dec  9 11:05:53 compute-0 nova_compute[189493]: 2025-12-09 11:05:53.193 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 11:05:53 compute-0 nova_compute[189493]: 2025-12-09 11:05:53.400 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  9 11:05:53 compute-0 nova_compute[189493]: 2025-12-09 11:05:53.401 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec  9 11:05:56 compute-0 podman[248215]: 2025-12-09 11:05:56.955238139 +0000 UTC m=+0.102019732 container health_status 8508a94dacd5acdb5dbf860f4282331529be5c86ebd3e90b10e1dde8bc5013e9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  9 11:05:57 compute-0 nova_compute[189493]: 2025-12-09 11:05:57.597 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 11:05:58 compute-0 nova_compute[189493]: 2025-12-09 11:05:58.196 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 11:05:59 compute-0 podman[203687]: time="2025-12-09T11:05:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  9 11:05:59 compute-0 podman[203687]: @ - - [09/Dec/2025:11:05:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Dec  9 11:05:59 compute-0 podman[203687]: @ - - [09/Dec/2025:11:05:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4806 "" "Go-http-client/1.1"
Dec  9 11:06:00 compute-0 python3[248411]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --format "{{.Names}} {{.Status}}" | grep openstack_network_exporter _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Dec  9 11:06:01 compute-0 openstack_network_exporter[205823]: ERROR   11:06:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  9 11:06:01 compute-0 openstack_network_exporter[205823]: ERROR   11:06:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  9 11:06:01 compute-0 openstack_network_exporter[205823]: ERROR   11:06:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  9 11:06:01 compute-0 openstack_network_exporter[205823]: ERROR   11:06:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  9 11:06:01 compute-0 openstack_network_exporter[205823]: ERROR   11:06:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  9 11:06:01 compute-0 podman[248450]: 2025-12-09 11:06:01.964174177 +0000 UTC m=+0.112723381 container health_status 8ad198c17f1da12dc50d5e17562d0139fb2a2f84db056ee9551dbf4f34c4cb9d (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, version=9.4, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, maintainer=Red Hat, Inc., vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.buildah.version=1.29.0, io.openshift.expose-services=, config_id=edpm, release=1214.1726694543, io.openshift.tags=base rhel9, io.k8s.display-name=Red Hat Universal Base Image 9, managed_by=edpm_ansible, release-0.7.12=, vendor=Red Hat, Inc., com.redhat.component=ubi9-container, name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, container_name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., architecture=x86_64, build-date=2024-09-18T21:23:30, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f)
Dec  9 11:06:01 compute-0 podman[248451]: 2025-12-09 11:06:01.96427078 +0000 UTC m=+0.106214252 container health_status ceb1c84a2b093143b9383b7e11364d7e851348d724743a0cd9ce4fd0c7070c92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Dec  9 11:06:02 compute-0 nova_compute[189493]: 2025-12-09 11:06:02.600 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 11:06:03 compute-0 nova_compute[189493]: 2025-12-09 11:06:03.198 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 11:06:03 compute-0 podman[248488]: 2025-12-09 11:06:03.925523384 +0000 UTC m=+0.079976557 container health_status b432835229990b9e7cd237d75f8273b15e565fca524d4ea9a7c1f1bf3c773614 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, tcib_managed=true, config_id=edpm, managed_by=edpm_ansible, org.label-schema.build-date=20251125, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0)
Dec  9 11:06:03 compute-0 podman[248487]: 2025-12-09 11:06:03.942696172 +0000 UTC m=+0.101949251 container health_status 8f562587c42532f877bd4ac5090cf2d81dd9415b6201e22f74972e6d6b9e9403 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Dec  9 11:06:07 compute-0 nova_compute[189493]: 2025-12-09 11:06:07.603 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 11:06:08 compute-0 nova_compute[189493]: 2025-12-09 11:06:08.202 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 11:06:12 compute-0 podman[248530]: 2025-12-09 11:06:12.018537065 +0000 UTC m=+0.158967878 container health_status 5da5cd4e36e0bba48fb617392bc8983ed1dbced7e4599ef74bb3327a2d50468d (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, name=ubi9-minimal, io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, release=1755695350, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, vendor=Red Hat, Inc., config_id=edpm, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Dec  9 11:06:12 compute-0 nova_compute[189493]: 2025-12-09 11:06:12.607 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 11:06:13 compute-0 nova_compute[189493]: 2025-12-09 11:06:13.205 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 11:06:13 compute-0 podman[248551]: 2025-12-09 11:06:13.99597039 +0000 UTC m=+0.138964686 container health_status d3a438131bb4ae6fd62d2e1493edbbbd51d1b8d6cbe1e9243f414a3aa421452b (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  9 11:06:14 compute-0 podman[248552]: 2025-12-09 11:06:14.02742438 +0000 UTC m=+0.165136059 container health_status e0a077177b2f078df1f170a6e5c0e8e08d4365b999ec0c487047ed6ab628f3d6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, managed_by=edpm_ansible)
Dec  9 11:06:17 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:06:17.001 106644 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  9 11:06:17 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:06:17.003 106644 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  9 11:06:17 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:06:17.004 106644 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  9 11:06:17 compute-0 nova_compute[189493]: 2025-12-09 11:06:17.610 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 11:06:18 compute-0 nova_compute[189493]: 2025-12-09 11:06:18.209 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 11:06:22 compute-0 nova_compute[189493]: 2025-12-09 11:06:22.613 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 11:06:23 compute-0 nova_compute[189493]: 2025-12-09 11:06:23.213 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.297 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is greater than the number of worker threads available to execute them; the polling process can therefore take longer than expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.299 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.299 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b800>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75cde150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.300 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f8a75e1b7d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.302 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e19820>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75cde150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.302 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75eb8080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75cde150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.302 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75eb8110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75cde150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.303 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b1a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75cde150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.303 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75eb81a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75cde150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.303 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b2c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75cde150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.303 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75cde150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.303 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75cde150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.304 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a78fa8380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75cde150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.304 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a7702ebd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75cde150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.305 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b3e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75cde150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.305 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75cde150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.305 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75eb8440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75cde150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.305 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a78c21460>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75cde150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.306 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b4a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75cde150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.306 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1bce0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75cde150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.307 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75cde150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.308 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1bd10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75cde150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.308 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75cde150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.308 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1bd70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75cde150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.309 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1bdd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75cde150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.310 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1be30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75cde150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.310 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1bf20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75cde150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.310 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b7a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75cde150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.310 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1bfb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a75cde150>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.314 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '7b43ca09-ed65-4465-9fcc-95caa6dc9a88', 'name': 'vn-afn7y6w-4mhk6z2gnzo4-cnlzzwhsflo5-vnf-4ifywm3gsfrq', 'flavor': {'id': 'cf91b364-8467-4d1e-8c92-f7d1fab99905', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '53d12211-5d5c-4333-b3ee-e3dcf1663767'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000004', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '736bbfddbeea47e3ac9d863ba120b8f2', 'user_id': 'e6d3a937c2a74eb0816d9f63820935e0', 'hostId': '17e7a15a42f56673ff2b1bfd38625d4824c4455b94d5713ec4c3a7ee', 'status': 'active', 'metadata': {'metering.server_group': '24f6e5b2-dd43-46f1-87a4-e2efc1300914'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.321 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f', 'name': 'test_0', 'flavor': {'id': 'cf91b364-8467-4d1e-8c92-f7d1fab99905', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '53d12211-5d5c-4333-b3ee-e3dcf1663767'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '736bbfddbeea47e3ac9d863ba120b8f2', 'user_id': 'e6d3a937c2a74eb0816d9f63820935e0', 'hostId': '17e7a15a42f56673ff2b1bfd38625d4824c4455b94d5713ec4c3a7ee', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.321 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.322 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1b800>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.322 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1b800>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.322 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.324 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-12-09T11:06:23.322409) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.332 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/network.incoming.bytes volume: 1654 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.340 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/network.incoming.bytes volume: 2346 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.341 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.342 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f8a7854a570>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.342 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.342 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e19820>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.343 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e19820>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.343 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.343 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-12-09T11:06:23.343192) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.391 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.391 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.392 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.429 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.429 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.430 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.430 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.430 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f8a75eb8050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.430 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.430 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75eb8080>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.431 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75eb8080>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.431 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.431 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/network.outgoing.packets volume: 22 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.431 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/network.outgoing.packets volume: 24 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.432 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
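Two workers cooperate on each heartbeat: worker 14 emits "Pollster heartbeat update" and worker 12 later logs "Updated heartbeat for ..." from _update_status. A minimal producer/consumer sketch of that handoff, assuming a shared in-process queue; the actual transport is not visible in these lines:

    import queue
    from datetime import datetime, timezone

    heartbeats = queue.Queue()  # (pollster_name, timestamp) pairs

    def heartbeat(pollster: str) -> None:
        # Polling-worker side (the "14" column): stamp the meter name.
        heartbeats.put((pollster, datetime.now(timezone.utc)))

    def update_status() -> None:
        # Status-worker side (the "12" column): record it slightly later.
        name, ts = heartbeats.get()
        print(f"Updated heartbeat for {name} ({ts.isoformat()})")

    heartbeat('network.outgoing.packets')
    update_status()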
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.432 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f8a75eb80e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.432 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-12-09T11:06:23.431113) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.432 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.432 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75eb8110>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.432 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75eb8110>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.432 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.433 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.433 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.433 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-12-09T11:06:23.432812) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.434 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.434 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f8a75e1b260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.434 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.434 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1b1a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.434 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1b1a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.434 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.435 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-12-09T11:06:23.434844) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.516 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.517 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.517 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.630 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.631 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.631 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.632 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
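Each "<uuid>/<meter> volume: N" line is a raw cumulative counter wrapped into a sample at compute/pollsters/__init__.py:108; per-device meters such as disk.device.read.bytes emit one sample per block device, which is why each instance shows three volumes. A simplified stand-in for that conversion (the Sample class and the 'vda' device name are illustrative, not ceilometer's own):

    from dataclasses import dataclass

    @dataclass
    class Sample:          # illustrative stand-in for ceilometer's sample object
        name: str          # meter, e.g. 'disk.device.read.bytes'
        volume: int        # raw cumulative value read from the hypervisor
        resource_id: str   # instance UUID plus device for per-device meters

    def stats_to_sample(instance_id: str, device: str, meter: str, value: int) -> Sample:
        return Sample(name=meter, volume=value, resource_id=f"{instance_id}-{device}")

    s = stats_to_sample('41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f', 'vda',
                        'disk.device.read.bytes', 23308800)
    print(s.resource_id, s.volume)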
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.632 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f8a75eb8170>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.632 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.632 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75eb81a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.632 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75eb81a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.632 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.633 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.633 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.633 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.634 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f8a75e1b290>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.634 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.634 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1b2c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.634 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1b2c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.634 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.634 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-12-09T11:06:23.632922) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.635 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.read.latency volume: 492966519 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.635 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-12-09T11:06:23.634837) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.635 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.read.latency volume: 88653492 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.635 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.read.latency volume: 59040938 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.636 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.read.latency volume: 469600468 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.636 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.read.latency volume: 78501609 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.636 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.read.latency volume: 60811824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.637 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.637 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f8a75e1b2f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.637 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.637 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1b320>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.637 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1b320>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.637 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.637 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.638 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-12-09T11:06:23.637664) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.638 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.638 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.638 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.639 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.639 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.639 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.640 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f8a75e1b350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.640 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.640 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1b380>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.640 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1b380>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.640 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.640 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.usage volume: 21299200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.641 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.641 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.641 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.641 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.642 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.642 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.643 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f8a7710f530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.643 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.643 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-12-09T11:06:23.640473) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.643 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a78fa8380>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.643 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a78fa8380>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.643 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.644 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-12-09T11:06:23.643852) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.675 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/cpu volume: 39850000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.707 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/cpu volume: 48790000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.708 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
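The cpu meter is cumulative guest CPU time in nanoseconds, so the 48790000000 read above is roughly 48.8 s consumed since the instance started. Utilization over an interval is the delta divided by wall-clock time times vCPUs; a quick worked example with assumed numbers for the second reading:

    # Assumed numbers: two cpu readings (ns) ~300 s apart on a 1-vCPU guest.
    prev_ns, curr_ns = 48_790_000_000, 48_940_000_000
    interval_s, vcpus = 300, 1

    cpu_util_pct = (curr_ns - prev_ns) / 1e9 / (interval_s * vcpus) * 100
    print(f"{cpu_util_pct:.2f}%")  # 0.05% -- an essentially idle test instance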
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.708 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f8a78ed1430>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.709 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.709 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a7702ebd0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.709 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a7702ebd0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.709 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.710 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.allocation volume: 21635072 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.710 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.711 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.711 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-12-09T11:06:23.709657) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.711 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.allocation volume: 21307392 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.712 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.712 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.713 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
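disk.device.capacity, disk.device.usage, and disk.device.allocation line up with the three fields libvirt reports per block device (capacity, physical, allocation), consistent with the pollster class names in the log (PerDeviceCapacityPollster, PerDevicePhysicalPollster, PerDeviceAllocationPollster). A sketch of reading those values directly with the libvirt-python binding; the connection URI and device name are assumptions, the domain name comes from the discovery payload above:

    import libvirt  # libvirt-python binding

    # Illustration only: read one device's block info over a read-only connection.
    conn = libvirt.openReadOnly('qemu:///system')          # URI assumed
    dom = conn.lookupByName('instance-00000001')           # name from the log
    capacity, allocation, physical = dom.blockInfo('vda')  # device name assumed
    print(f"capacity={capacity} allocation={allocation} physical={physical}")
    conn.close()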
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.713 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f8a75e1b3b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.713 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.714 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1b3e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.714 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1b3e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.714 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.714 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.714 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-12-09T11:06:23.714451) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.715 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.715 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.716 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.716 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.717 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.718 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.718 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f8a75e1b410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.718 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.719 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1b440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.719 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1b440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.719 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.719 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.write.latency volume: 2223058984 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.719 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-12-09T11:06:23.719303) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.720 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.write.latency volume: 10632793 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.720 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.721 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.write.latency volume: 1299788707 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.721 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.write.latency volume: 9241063 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.721 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.722 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.723 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f8a75eb8410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.723 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.723 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75eb8440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.723 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75eb8440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.723 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.724 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.724 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.725 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.725 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-12-09T11:06:23.723874) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
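power.state volume 1 is libvirt's VIR_DOMAIN_RUNNING, agreeing with the 'running' vm_state in the discovery payload. The standard virDomainState values, for reference:

    # libvirt virDomainState constants (stable values).
    DOMAIN_STATES = {
        0: 'nostate', 1: 'running', 2: 'blocked', 3: 'paused',
        4: 'shutdown', 5: 'shutoff', 6: 'crashed', 7: 'pmsuspended',
    }
    print(DOMAIN_STATES[1])  # running -> power.state volume 1 above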
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.725 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f8a75e1be90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.726 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.726 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a78c21460>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.726 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a78c21460>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.726 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.727 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-12-09T11:06:23.726617) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.727 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/network.outgoing.bytes volume: 2356 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.727 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/network.outgoing.bytes volume: 2384 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.728 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.728 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f8a75e1b470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.728 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.729 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1b4a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.729 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1b4a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.729 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.730 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.write.requests volume: 232 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.729 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-12-09T11:06:23.729484) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.730 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.730 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.731 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.write.requests volume: 234 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.731 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.732 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.733 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.733 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f8a75e1b830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.733 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.733 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1bce0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.734 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1bce0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.734 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.734 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.734 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-12-09T11:06:23.734299) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.735 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.736 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
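The .delta meters report change since the previous poll rather than the raw counter, so a first reading (or a counter reset) comes out as 0, which matches the volumes above. A minimal sketch of that bookkeeping; the cache dict and resource id are hypothetical stand-ins:

    _last: dict = {}  # hypothetical per-resource cache of the previous reading

    def delta(resource_id: str, current: int) -> int:
        prev = _last.get(resource_id)
        _last[resource_id] = current
        if prev is None or current < prev:  # first poll, or counter reset
            return 0
        return current - prev

    print(delta('vm-net0', 2346))  # 0, as in the log's first cycle
    print(delta('vm-net0', 2500))  # 154 on the next cycle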
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.736 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f8a75e1b4d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.736 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.736 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1b500>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.736 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1b500>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.737 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.738 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.738 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f8a75e1bad0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.738 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
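Unlike the meters above, network.incoming.bytes.rate is skipped outright here because its discovery pass handed it no resources this cycle (manager.py:321). A hedged sketch of that guard, with the manager internals reduced to a single check:

    def maybe_poll(pollster_name: str, resources: list) -> None:
        # Simplified: mirrors "Skip pollster ..., no new resources found this cycle".
        if not resources:
            print(f"Skip pollster {pollster_name}, no new resources found this cycle")
            return
        print(f"Polling pollster {pollster_name}")

    maybe_poll('network.incoming.bytes.rate', [])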
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.738 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f8a75e1b530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.739 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.739 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1b560>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.739 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-12-09T11:06:23.737022) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.740 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1b560>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.740 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.740 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-12-09T11:06:23.740168) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.741 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.741 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f8a75e1bd40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.741 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.742 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1bd70>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.742 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1bd70>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.742 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.742 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/network.incoming.packets volume: 16 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.743 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/network.incoming.packets volume: 26 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.743 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-12-09T11:06:23.742379) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.743 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.743 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f8a75e1bda0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.743 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.744 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1bdd0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.744 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1bdd0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.744 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.744 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.744 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.745 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.745 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f8a75e1be00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.745 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.745 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1be30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.745 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1be30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.746 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.746 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.746 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.746 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.747 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f8a75e1bef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.747 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.747 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1bf20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.747 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1bf20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.747 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.747 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.747 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.748 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.749 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f8a75e1b770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.749 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.749 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1b7a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.749 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1b7a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.749 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-12-09T11:06:23.744247) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.749 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.749 14 DEBUG ceilometer.compute.pollsters [-] 7b43ca09-ed65-4465-9fcc-95caa6dc9a88/memory.usage volume: 48.953125 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.749 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/memory.usage volume: 48.796875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.749 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-12-09T11:06:23.746047) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.750 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.750 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f8a75e1bf80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.750 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.750 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-12-09T11:06:23.747422) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.751 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-12-09T11:06:23.749470) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
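Note that the "Updated heartbeat" lines are written by a different thread (log field 12) than the polling work itself (field 14): workers record a timestamp per pollster and a separate status thread publishes it. A minimal sketch of that pattern, assuming a plain dict guarded by a lock rather than ceilometer's actual _update_status machinery:

    # Producer/consumer heartbeat registry; names here are illustrative.
    import datetime
    import threading

    heartbeats = {}
    hb_lock = threading.Lock()

    def record_heartbeat(pollster_name):
        # Called from the polling thread after each cycle (the "14" lines).
        with hb_lock:
            heartbeats[pollster_name] = datetime.datetime.now(datetime.timezone.utc)

    def publish_status():
        # Called from the status thread (the "12" lines above).
        with hb_lock:
            for name, ts in sorted(heartbeats.items()):
                print(f"Updated heartbeat for {name} ({ts.isoformat()})")

    record_heartbeat("disk.root.size")
    record_heartbeat("memory.usage")
    publish_status()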
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.752 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.752 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.752 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.752 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.752 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.752 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.752 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.752 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.752 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.752 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.753 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.753 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.753 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.753 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.753 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.753 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.753 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.753 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.753 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.753 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.753 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.754 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.754 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.754 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.754 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 11:06:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:06:23.754 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
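The burst of "Finished processing pollster" lines marks the end of this polling task. When reading long captures like this one, per-pollster timings can be extracted mechanically; a small parser written against the exact line shapes above (adjust the regexes if your log format differs):

    # Compute per-pollster durations from the INFO lines in this log.
    import re
    import sys
    from datetime import datetime

    START = re.compile(r"(\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d+) \d+ INFO "
                       r"ceilometer\.polling\.manager \[-\] Polling pollster (\S+)")
    FINISH = re.compile(r"(\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d+) \d+ INFO "
                        r"ceilometer\.polling\.manager \[-\] Finished polling pollster (\S+)")

    def durations(lines):
        starts = {}
        for line in lines:
            if m := START.search(line):
                starts[m.group(2)] = datetime.strptime(m.group(1), "%Y-%m-%d %H:%M:%S.%f")
            elif m := FINISH.search(line):
                finished = datetime.strptime(m.group(1), "%Y-%m-%d %H:%M:%S.%f")
                if started := starts.pop(m.group(2), None):
                    yield m.group(2), (finished - started).total_seconds()

    # Usage: python3 pollster_timings.py < compute-0.log
    for name, secs in durations(sys.stdin):
        print(f"{name}: {secs * 1000:.1f} ms")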
Dec  9 11:06:23 compute-0 podman[248601]: 2025-12-09 11:06:23.994100917 +0000 UTC m=+0.139951032 container health_status 0391d8911d61abd7376f1f93f329cadfe8d3add845c9e6f46fc2c3dfbcc4f02a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
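The podman health_status events such as the multipathd one above come from scheduled container healthchecks. The same state can be read back on the host; a sketch using the podman CLI via subprocess, parsing the inspect JSON defensively since the field layout differs between podman versions:

    # Read a container's health state with `podman inspect`; requires podman
    # on the host and sufficient privileges. Container name from the log above.
    import json
    import subprocess

    def health(container="multipathd"):
        out = subprocess.run(["podman", "inspect", container],
                             check=True, capture_output=True, text=True).stdout
        state = json.loads(out)[0].get("State", {})
        # Older podman releases used "Healthcheck", newer ones "Health".
        return state.get("Health") or state.get("Healthcheck") or {}

    print(health().get("Status", "unknown"))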
Dec  9 11:06:27 compute-0 nova_compute[189493]: 2025-12-09 11:06:27.615 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 11:06:27 compute-0 podman[248620]: 2025-12-09 11:06:27.977170495 +0000 UTC m=+0.120846733 container health_status 8508a94dacd5acdb5dbf860f4282331529be5c86ebd3e90b10e1dde8bc5013e9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec  9 11:06:28 compute-0 nova_compute[189493]: 2025-12-09 11:06:28.216 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 11:06:28 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Dec  9 11:06:29 compute-0 podman[203687]: time="2025-12-09T11:06:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  9 11:06:29 compute-0 podman[203687]: @ - - [09/Dec/2025:11:06:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Dec  9 11:06:29 compute-0 podman[203687]: @ - - [09/Dec/2025:11:06:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4805 "" "Go-http-client/1.1"
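The two GET requests above are the prometheus-podman-exporter polling the libpod REST API over the podman socket (CONTAINER_HOST=unix:///run/podman/podman.sock in the exporter config logged earlier). The same endpoint can be queried with nothing but the standard library; a sketch, assuming that socket path and root privileges:

    # Query the libpod API over the unix socket shown in the log (stdlib only).
    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        def __init__(self, path):
            super().__init__("localhost")  # host header only; ignored for AF_UNIX
            self.unix_path = path

        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self.unix_path)

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    for c in json.loads(conn.getresponse().read()):
        print(c.get("Names"), c.get("State"))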
Dec  9 11:06:31 compute-0 openstack_network_exporter[205823]: ERROR   11:06:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  9 11:06:31 compute-0 openstack_network_exporter[205823]: ERROR   11:06:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  9 11:06:31 compute-0 openstack_network_exporter[205823]: ERROR   11:06:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  9 11:06:31 compute-0 openstack_network_exporter[205823]: ERROR   11:06:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  9 11:06:31 compute-0 openstack_network_exporter[205823]: ERROR   11:06:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
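These exporter errors mean it cannot find the OVS/OVN control sockets it probes (and since this node runs ovn-controller rather than ovn-northd, the northd probe can be expected to fail on a compute host). A quick check of which control sockets actually exist, assuming the host runtime directories that the exporter container mounts (/var/run/openvswitch and /var/lib/openvswitch/ovn, per its config logged further down):

    # List the *.ctl control sockets that appctl-style calls look for.
    import glob

    for rundir in ("/var/run/openvswitch", "/var/lib/openvswitch/ovn"):
        socks = glob.glob(f"{rundir}/*.ctl")
        print(rundir, "->", ", ".join(socks) if socks else "no control sockets found")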
Dec  9 11:06:32 compute-0 nova_compute[189493]: 2025-12-09 11:06:32.618 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 11:06:32 compute-0 podman[248646]: 2025-12-09 11:06:32.972385218 +0000 UTC m=+0.111861480 container health_status ceb1c84a2b093143b9383b7e11364d7e851348d724743a0cd9ce4fd0c7070c92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, managed_by=edpm_ansible, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3)
Dec  9 11:06:32 compute-0 podman[248645]: 2025-12-09 11:06:32.979883474 +0000 UTC m=+0.122939799 container health_status 8ad198c17f1da12dc50d5e17562d0139fb2a2f84db056ee9551dbf4f34c4cb9d (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, build-date=2024-09-18T21:23:30, vcs-type=git, com.redhat.component=ubi9-container, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., io.buildah.version=1.29.0, maintainer=Red Hat, Inc., container_name=kepler, distribution-scope=public, release-0.7.12=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9, architecture=x86_64, config_id=edpm, io.openshift.tags=base rhel9, version=9.4, io.openshift.expose-services=, name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f)
Dec  9 11:06:33 compute-0 nova_compute[189493]: 2025-12-09 11:06:33.219 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 11:06:34 compute-0 podman[248684]: 2025-12-09 11:06:34.981550891 +0000 UTC m=+0.117032034 container health_status 8f562587c42532f877bd4ac5090cf2d81dd9415b6201e22f74972e6d6b9e9403 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec  9 11:06:34 compute-0 podman[248685]: 2025-12-09 11:06:34.993896724 +0000 UTC m=+0.125211408 container health_status b432835229990b9e7cd237d75f8273b15e565fca524d4ea9a7c1f1bf3c773614 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.schema-version=1.0)
Dec  9 11:06:36 compute-0 nova_compute[189493]: 2025-12-09 11:06:36.842 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  9 11:06:37 compute-0 nova_compute[189493]: 2025-12-09 11:06:37.621 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 11:06:37 compute-0 nova_compute[189493]: 2025-12-09 11:06:37.842 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  9 11:06:37 compute-0 nova_compute[189493]: 2025-12-09 11:06:37.843 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  9 11:06:38 compute-0 nova_compute[189493]: 2025-12-09 11:06:38.221 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 11:06:38 compute-0 nova_compute[189493]: 2025-12-09 11:06:38.837 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  9 11:06:41 compute-0 nova_compute[189493]: 2025-12-09 11:06:41.841 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  9 11:06:42 compute-0 nova_compute[189493]: 2025-12-09 11:06:42.623 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 11:06:42 compute-0 podman[248720]: 2025-12-09 11:06:42.961165325 +0000 UTC m=+0.109651431 container health_status 5da5cd4e36e0bba48fb617392bc8983ed1dbced7e4599ef74bb3327a2d50468d (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, version=9.6, distribution-scope=public, container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec  9 11:06:43 compute-0 nova_compute[189493]: 2025-12-09 11:06:43.223 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 11:06:44 compute-0 podman[248741]: 2025-12-09 11:06:44.804951483 +0000 UTC m=+0.086013225 container health_status d3a438131bb4ae6fd62d2e1493edbbbd51d1b8d6cbe1e9243f414a3aa421452b (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  9 11:06:44 compute-0 nova_compute[189493]: 2025-12-09 11:06:44.840 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  9 11:06:44 compute-0 nova_compute[189493]: 2025-12-09 11:06:44.842 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec  9 11:06:44 compute-0 podman[248742]: 2025-12-09 11:06:44.843190761 +0000 UTC m=+0.123963436 container health_status e0a077177b2f078df1f170a6e5c0e8e08d4365b999ec0c487047ed6ab628f3d6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec  9 11:06:45 compute-0 nova_compute[189493]: 2025-12-09 11:06:45.027 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Acquiring lock "refresh_cache-7b43ca09-ed65-4465-9fcc-95caa6dc9a88" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Dec  9 11:06:45 compute-0 nova_compute[189493]: 2025-12-09 11:06:45.028 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Acquired lock "refresh_cache-7b43ca09-ed65-4465-9fcc-95caa6dc9a88" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Dec  9 11:06:45 compute-0 nova_compute[189493]: 2025-12-09 11:06:45.028 189497 DEBUG nova.network.neutron [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] [instance: 7b43ca09-ed65-4465-9fcc-95caa6dc9a88] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Dec  9 11:06:46 compute-0 nova_compute[189493]: 2025-12-09 11:06:46.343 189497 DEBUG nova.network.neutron [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] [instance: 7b43ca09-ed65-4465-9fcc-95caa6dc9a88] Updating instance_info_cache with network_info: [{"id": "b903bb84-e176-4730-b223-613a9b01712b", "address": "fa:16:3e:91:d3:f4", "network": {"id": "c5af7354-5afe-400a-9e13-5500648117d8", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.92", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.176", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "736bbfddbeea47e3ac9d863ba120b8f2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb903bb84-e1", "ovs_interfaceid": "b903bb84-e176-4730-b223-613a9b01712b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Dec  9 11:06:46 compute-0 nova_compute[189493]: 2025-12-09 11:06:46.366 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Releasing lock "refresh_cache-7b43ca09-ed65-4465-9fcc-95caa6dc9a88" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Dec  9 11:06:46 compute-0 nova_compute[189493]: 2025-12-09 11:06:46.366 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] [instance: 7b43ca09-ed65-4465-9fcc-95caa6dc9a88] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
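The "Updating instance_info_cache" line above carries the instance's full network_info document as JSON. For reference, pulling the fixed and floating addresses out of that structure (the snippet hard-codes a trimmed fragment of the payload logged above):

    # Walk the network_info structure from the cache-refresh line above.
    import json

    network_info = json.loads("""[{
      "id": "b903bb84-e176-4730-b223-613a9b01712b",
      "network": {"subnets": [{"cidr": "192.168.0.0/24",
        "ips": [{"address": "192.168.0.92", "type": "fixed",
                 "floating_ips": [{"address": "192.168.122.176",
                                   "type": "floating"}]}]}]}
    }]""")

    for vif in network_info:
        for subnet in vif["network"]["subnets"]:
            for ip in subnet["ips"]:
                print(vif["id"], ip["type"], ip["address"])
                for fip in ip.get("floating_ips", []):
                    print(vif["id"], fip["type"], fip["address"])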
Dec  9 11:06:46 compute-0 nova_compute[189493]: 2025-12-09 11:06:46.367 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  9 11:06:46 compute-0 nova_compute[189493]: 2025-12-09 11:06:46.367 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  9 11:06:46 compute-0 nova_compute[189493]: 2025-12-09 11:06:46.395 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  9 11:06:46 compute-0 nova_compute[189493]: 2025-12-09 11:06:46.395 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  9 11:06:46 compute-0 nova_compute[189493]: 2025-12-09 11:06:46.396 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
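The three lockutils lines above show the acquire/wait/hold accounting nova emits around the resource tracker's "compute_resources" lock. oslo.concurrency exposes the same primitive directly; a minimal usage sketch (process-local lock, no external file locking configured):

    # Same lock primitive as the resource-tracker lines above; requires
    # the oslo.concurrency package (pip install oslo.concurrency).
    from oslo_concurrency import lockutils

    with lockutils.lock("compute_resources"):
        # Critical section; in nova this guards resource-tracker state.
        print("holding compute_resources")

    @lockutils.synchronized("compute_resources")
    def audit():
        print("also serialized on compute_resources")

    audit()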
Dec  9 11:06:46 compute-0 nova_compute[189493]: 2025-12-09 11:06:46.396 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec  9 11:06:46 compute-0 nova_compute[189493]: 2025-12-09 11:06:46.517 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  9 11:06:46 compute-0 nova_compute[189493]: 2025-12-09 11:06:46.594 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  9 11:06:46 compute-0 nova_compute[189493]: 2025-12-09 11:06:46.595 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  9 11:06:46 compute-0 nova_compute[189493]: 2025-12-09 11:06:46.655 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  9 11:06:46 compute-0 nova_compute[189493]: 2025-12-09 11:06:46.655 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  9 11:06:46 compute-0 nova_compute[189493]: 2025-12-09 11:06:46.717 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.eph0 --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  9 11:06:46 compute-0 nova_compute[189493]: 2025-12-09 11:06:46.718 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  9 11:06:46 compute-0 nova_compute[189493]: 2025-12-09 11:06:46.774 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.eph0 --force-share --output=json" returned: 0 in 0.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  9 11:06:46 compute-0 nova_compute[189493]: 2025-12-09 11:06:46.781 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  9 11:06:46 compute-0 nova_compute[189493]: 2025-12-09 11:06:46.838 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  9 11:06:46 compute-0 nova_compute[189493]: 2025-12-09 11:06:46.839 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  9 11:06:46 compute-0 nova_compute[189493]: 2025-12-09 11:06:46.922 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk --force-share --output=json" returned: 0 in 0.083s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  9 11:06:46 compute-0 nova_compute[189493]: 2025-12-09 11:06:46.924 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  9 11:06:46 compute-0 nova_compute[189493]: 2025-12-09 11:06:46.983 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.eph0 --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Dec  9 11:06:46 compute-0 nova_compute[189493]: 2025-12-09 11:06:46.984 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Dec  9 11:06:47 compute-0 nova_compute[189493]: 2025-12-09 11:06:47.043 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.eph0 --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  9 11:06:47 compute-0 nova_compute[189493]: 2025-12-09 11:06:47.416 189497 WARNING nova.virt.libvirt.driver [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  9 11:06:47 compute-0 nova_compute[189493]: 2025-12-09 11:06:47.417 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4853MB free_disk=72.13289260864258GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  9 11:06:47 compute-0 nova_compute[189493]: 2025-12-09 11:06:47.417 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  9 11:06:47 compute-0 nova_compute[189493]: 2025-12-09 11:06:47.417 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  9 11:06:47 compute-0 nova_compute[189493]: 2025-12-09 11:06:47.505 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Instance 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  9 11:06:47 compute-0 nova_compute[189493]: 2025-12-09 11:06:47.506 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Instance 7b43ca09-ed65-4465-9fcc-95caa6dc9a88 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  9 11:06:47 compute-0 nova_compute[189493]: 2025-12-09 11:06:47.506 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  9 11:06:47 compute-0 nova_compute[189493]: 2025-12-09 11:06:47.506 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  9 11:06:47 compute-0 nova_compute[189493]: 2025-12-09 11:06:47.598 189497 DEBUG nova.compute.provider_tree [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Inventory has not changed in ProviderTree for provider: cdc1168d-33c9-4d2c-8f23-1b695a68afd0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  9 11:06:47 compute-0 nova_compute[189493]: 2025-12-09 11:06:47.617 189497 DEBUG nova.scheduler.client.report [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Inventory has not changed for provider cdc1168d-33c9-4d2c-8f23-1b695a68afd0 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  9 11:06:47 compute-0 nova_compute[189493]: 2025-12-09 11:06:47.619 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  9 11:06:47 compute-0 nova_compute[189493]: 2025-12-09 11:06:47.620 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.202s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  9 11:06:47 compute-0 nova_compute[189493]: 2025-12-09 11:06:47.626 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:06:48 compute-0 nova_compute[189493]: 2025-12-09 11:06:48.226 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:06:52 compute-0 nova_compute[189493]: 2025-12-09 11:06:52.629 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:06:53 compute-0 nova_compute[189493]: 2025-12-09 11:06:53.094 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  9 11:06:53 compute-0 nova_compute[189493]: 2025-12-09 11:06:53.095 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  9 11:06:53 compute-0 nova_compute[189493]: 2025-12-09 11:06:53.228 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:06:54 compute-0 podman[248816]: 2025-12-09 11:06:54.938435045 +0000 UTC m=+0.086754084 container health_status 0391d8911d61abd7376f1f93f329cadfe8d3add845c9e6f46fc2c3dfbcc4f02a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, container_name=multipathd, org.label-schema.build-date=20251202, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec  9 11:06:57 compute-0 nova_compute[189493]: 2025-12-09 11:06:57.632 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 11:06:58 compute-0 nova_compute[189493]: 2025-12-09 11:06:58.231 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 11:06:58 compute-0 podman[248837]: 2025-12-09 11:06:58.95737613 +0000 UTC m=+0.103773168 container health_status 8508a94dacd5acdb5dbf860f4282331529be5c86ebd3e90b10e1dde8bc5013e9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec  9 11:06:59 compute-0 podman[203687]: time="2025-12-09T11:06:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  9 11:06:59 compute-0 podman[203687]: @ - - [09/Dec/2025:11:06:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Dec  9 11:06:59 compute-0 podman[203687]: @ - - [09/Dec/2025:11:06:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4804 "" "Go-http-client/1.1"
Dec  9 11:07:00 compute-0 systemd[1]: session-31.scope: Deactivated successfully.
Dec  9 11:07:00 compute-0 systemd[1]: session-31.scope: Consumed 4.766s CPU time.
Dec  9 11:07:00 compute-0 systemd-logind[806]: Session 31 logged out. Waiting for processes to exit.
Dec  9 11:07:00 compute-0 systemd-logind[806]: Removed session 31.
Dec  9 11:07:01 compute-0 openstack_network_exporter[205823]: ERROR   11:07:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  9 11:07:01 compute-0 openstack_network_exporter[205823]: ERROR   11:07:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  9 11:07:01 compute-0 openstack_network_exporter[205823]: ERROR   11:07:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  9 11:07:01 compute-0 openstack_network_exporter[205823]: ERROR   11:07:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  9 11:07:01 compute-0 openstack_network_exporter[205823]: ERROR   11:07:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  9 11:07:02 compute-0 nova_compute[189493]: 2025-12-09 11:07:02.635 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 11:07:03 compute-0 nova_compute[189493]: 2025-12-09 11:07:03.234 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 11:07:03 compute-0 podman[248862]: 2025-12-09 11:07:03.967723707 +0000 UTC m=+0.113285196 container health_status 8ad198c17f1da12dc50d5e17562d0139fb2a2f84db056ee9551dbf4f34c4cb9d (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, summary=Provides the latest release of Red Hat Universal Base Image 9., container_name=kepler, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, io.buildah.version=1.29.0, io.openshift.tags=base rhel9, vcs-type=git, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, maintainer=Red Hat, Inc., release-0.7.12=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, build-date=2024-09-18T21:23:30, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, release=1214.1726694543, managed_by=edpm_ansible, version=9.4)
Dec  9 11:07:03 compute-0 podman[248863]: 2025-12-09 11:07:03.985030059 +0000 UTC m=+0.115793232 container health_status ceb1c84a2b093143b9383b7e11364d7e851348d724743a0cd9ce4fd0c7070c92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Dec  9 11:07:05 compute-0 podman[248901]: 2025-12-09 11:07:05.916520107 +0000 UTC m=+0.065610673 container health_status b432835229990b9e7cd237d75f8273b15e565fca524d4ea9a7c1f1bf3c773614 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, config_id=edpm, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4)
Dec  9 11:07:05 compute-0 podman[248900]: 2025-12-09 11:07:05.934333581 +0000 UTC m=+0.076498586 container health_status 8f562587c42532f877bd4ac5090cf2d81dd9415b6201e22f74972e6d6b9e9403 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Dec  9 11:07:07 compute-0 nova_compute[189493]: 2025-12-09 11:07:07.640 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 11:07:08 compute-0 nova_compute[189493]: 2025-12-09 11:07:08.237 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 11:07:12 compute-0 nova_compute[189493]: 2025-12-09 11:07:12.643 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 11:07:13 compute-0 nova_compute[189493]: 2025-12-09 11:07:13.240 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 11:07:13 compute-0 podman[248938]: 2025-12-09 11:07:13.971941033 +0000 UTC m=+0.104968400 container health_status 5da5cd4e36e0bba48fb617392bc8983ed1dbced7e4599ef74bb3327a2d50468d (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=minimal rhel9, architecture=x86_64, name=ubi9-minimal, distribution-scope=public, com.redhat.component=ubi9-minimal-container, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, vendor=Red Hat, Inc., io.openshift.expose-services=, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., release=1755695350, maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, config_id=edpm, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Dec  9 11:07:14 compute-0 podman[248958]: 2025-12-09 11:07:14.934627366 +0000 UTC m=+0.081061605 container health_status d3a438131bb4ae6fd62d2e1493edbbbd51d1b8d6cbe1e9243f414a3aa421452b (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec  9 11:07:15 compute-0 podman[248981]: 2025-12-09 11:07:15.149904863 +0000 UTC m=+0.181858866 container health_status e0a077177b2f078df1f170a6e5c0e8e08d4365b999ec0c487047ed6ab628f3d6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Dec  9 11:07:17 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:07:17.003 106644 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  9 11:07:17 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:07:17.004 106644 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  9 11:07:17 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:07:17.005 106644 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  9 11:07:17 compute-0 nova_compute[189493]: 2025-12-09 11:07:17.647 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 11:07:18 compute-0 nova_compute[189493]: 2025-12-09 11:07:18.243 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 11:07:22 compute-0 nova_compute[189493]: 2025-12-09 11:07:22.649 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 11:07:23 compute-0 nova_compute[189493]: 2025-12-09 11:07:23.246 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 11:07:25 compute-0 podman[249007]: 2025-12-09 11:07:25.944333737 +0000 UTC m=+0.089851465 container health_status 0391d8911d61abd7376f1f93f329cadfe8d3add845c9e6f46fc2c3dfbcc4f02a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=multipathd, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=multipathd, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Dec  9 11:07:27 compute-0 nova_compute[189493]: 2025-12-09 11:07:27.651 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 11:07:28 compute-0 nova_compute[189493]: 2025-12-09 11:07:28.249 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 11:07:29 compute-0 podman[203687]: time="2025-12-09T11:07:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  9 11:07:29 compute-0 podman[203687]: @ - - [09/Dec/2025:11:07:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Dec  9 11:07:29 compute-0 podman[203687]: @ - - [09/Dec/2025:11:07:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4807 "" "Go-http-client/1.1"
Dec  9 11:07:29 compute-0 podman[249027]: 2025-12-09 11:07:29.931572043 +0000 UTC m=+0.086391135 container health_status 8508a94dacd5acdb5dbf860f4282331529be5c86ebd3e90b10e1dde8bc5013e9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  9 11:07:31 compute-0 openstack_network_exporter[205823]: ERROR   11:07:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  9 11:07:31 compute-0 openstack_network_exporter[205823]: ERROR   11:07:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  9 11:07:31 compute-0 openstack_network_exporter[205823]: ERROR   11:07:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  9 11:07:31 compute-0 openstack_network_exporter[205823]: ERROR   11:07:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  9 11:07:31 compute-0 openstack_network_exporter[205823]: ERROR   11:07:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  9 11:07:32 compute-0 nova_compute[189493]: 2025-12-09 11:07:32.654 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 11:07:33 compute-0 nova_compute[189493]: 2025-12-09 11:07:33.251 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 11:07:34 compute-0 podman[249051]: 2025-12-09 11:07:34.96193125 +0000 UTC m=+0.101867828 container health_status ceb1c84a2b093143b9383b7e11364d7e851348d724743a0cd9ce4fd0c7070c92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec  9 11:07:34 compute-0 podman[249050]: 2025-12-09 11:07:34.971540811 +0000 UTC m=+0.118621816 container health_status 8ad198c17f1da12dc50d5e17562d0139fb2a2f84db056ee9551dbf4f34c4cb9d (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, name=ubi9, release=1214.1726694543, distribution-scope=public, io.buildah.version=1.29.0, io.openshift.tags=base rhel9, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., container_name=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, config_id=edpm, maintainer=Red Hat, Inc., architecture=x86_64, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, io.openshift.expose-services=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4)
Dec  9 11:07:36 compute-0 podman[249089]: 2025-12-09 11:07:36.94523741 +0000 UTC m=+0.089343872 container health_status 8f562587c42532f877bd4ac5090cf2d81dd9415b6201e22f74972e6d6b9e9403 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202)
Dec  9 11:07:36 compute-0 podman[249090]: 2025-12-09 11:07:36.968686342 +0000 UTC m=+0.107930257 container health_status b432835229990b9e7cd237d75f8273b15e565fca524d4ea9a7c1f1bf3c773614 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, managed_by=edpm_ansible)
Dec  9 11:07:37 compute-0 nova_compute[189493]: 2025-12-09 11:07:37.657 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 11:07:37 compute-0 nova_compute[189493]: 2025-12-09 11:07:37.842 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  9 11:07:38 compute-0 nova_compute[189493]: 2025-12-09 11:07:38.254 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 11:07:38 compute-0 nova_compute[189493]: 2025-12-09 11:07:38.843 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  9 11:07:38 compute-0 nova_compute[189493]: 2025-12-09 11:07:38.844 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  9 11:07:39 compute-0 nova_compute[189493]: 2025-12-09 11:07:39.838 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  9 11:07:39 compute-0 nova_compute[189493]: 2025-12-09 11:07:39.838 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  9 11:07:42 compute-0 nova_compute[189493]: 2025-12-09 11:07:42.660 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 11:07:43 compute-0 nova_compute[189493]: 2025-12-09 11:07:43.257 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 11:07:43 compute-0 nova_compute[189493]: 2025-12-09 11:07:43.841 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  9 11:07:44 compute-0 podman[249126]: 2025-12-09 11:07:44.805644447 +0000 UTC m=+0.117418964 container health_status 5da5cd4e36e0bba48fb617392bc8983ed1dbced7e4599ef74bb3327a2d50468d (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vendor=Red Hat, Inc., architecture=x86_64, io.buildah.version=1.33.7, managed_by=edpm_ansible, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., release=1755695350, version=9.6, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Dec  9 11:07:45 compute-0 nova_compute[189493]: 2025-12-09 11:07:45.840 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  9 11:07:45 compute-0 nova_compute[189493]: 2025-12-09 11:07:45.841 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
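The ComputeManager tasks above are driven by oslo.service's periodic task machinery: each run_periodic_tasks() pass invokes whichever decorated methods are due. A minimal sketch of the pattern (class name and task body are illustrative; nova's real tasks take their spacing from config options):

    from oslo_config import cfg
    from oslo_service import periodic_task

    class Manager(periodic_task.PeriodicTasks):
        def __init__(self):
            super().__init__(cfg.CONF)

        # a bare @periodic_task runs on every run_periodic_tasks() pass;
        # the "Running periodic task ..." debug lines above come from here
        @periodic_task.periodic_task
        def _poll_unconfirmed_resizes(self, context):
            print("polling unconfirmed resizes")

    mgr = Manager()
    mgr.run_periodic_tasks(context=None)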
Dec  9 11:07:45 compute-0 nova_compute[189493]: 2025-12-09 11:07:45.880 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  9 11:07:45 compute-0 nova_compute[189493]: 2025-12-09 11:07:45.880 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  9 11:07:45 compute-0 nova_compute[189493]: 2025-12-09 11:07:45.881 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
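The Acquiring/acquired/released triple around "compute_resources" is emitted by oslo.concurrency's synchronized decorator. A minimal sketch of the same pattern, with the lock name taken from the log and a placeholder body:

    from oslo_concurrency import lockutils

    @lockutils.synchronized('compute_resources')
    def clean_compute_node_cache():
        # other callers decorated with the same lock name block here;
        # lockutils logs the acquired/released lines seen above
        pass

    clean_compute_node_cache()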
Dec  9 11:07:45 compute-0 nova_compute[189493]: 2025-12-09 11:07:45.881 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  9 11:07:45 compute-0 podman[249147]: 2025-12-09 11:07:45.931085178 +0000 UTC m=+0.085432480 container health_status d3a438131bb4ae6fd62d2e1493edbbbd51d1b8d6cbe1e9243f414a3aa421452b (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  9 11:07:45 compute-0 nova_compute[189493]: 2025-12-09 11:07:45.974 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  9 11:07:45 compute-0 podman[249148]: 2025-12-09 11:07:45.978653499 +0000 UTC m=+0.134764047 container health_status e0a077177b2f078df1f170a6e5c0e8e08d4365b999ec0c487047ed6ab628f3d6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true)
Dec  9 11:07:46 compute-0 nova_compute[189493]: 2025-12-09 11:07:46.036 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  9 11:07:46 compute-0 nova_compute[189493]: 2025-12-09 11:07:46.038 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  9 11:07:46 compute-0 nova_compute[189493]: 2025-12-09 11:07:46.110 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  9 11:07:46 compute-0 nova_compute[189493]: 2025-12-09 11:07:46.112 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  9 11:07:46 compute-0 nova_compute[189493]: 2025-12-09 11:07:46.190 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.eph0 --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  9 11:07:46 compute-0 nova_compute[189493]: 2025-12-09 11:07:46.191 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  9 11:07:46 compute-0 nova_compute[189493]: 2025-12-09 11:07:46.254 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk.eph0 --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  9 11:07:46 compute-0 nova_compute[189493]: 2025-12-09 11:07:46.265 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  9 11:07:46 compute-0 nova_compute[189493]: 2025-12-09 11:07:46.326 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  9 11:07:46 compute-0 nova_compute[189493]: 2025-12-09 11:07:46.327 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  9 11:07:46 compute-0 nova_compute[189493]: 2025-12-09 11:07:46.403 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  9 11:07:46 compute-0 nova_compute[189493]: 2025-12-09 11:07:46.408 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  9 11:07:46 compute-0 nova_compute[189493]: 2025-12-09 11:07:46.508 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.eph0 --force-share --output=json" returned: 0 in 0.100s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
Dec  9 11:07:46 compute-0 nova_compute[189493]: 2025-12-09 11:07:46.510 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384#033[00m
Dec  9 11:07:46 compute-0 nova_compute[189493]: 2025-12-09 11:07:46.581 189497 DEBUG oslo_concurrency.processutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.eph0 --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422#033[00m
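Each qemu-img info run above is wrapped in oslo.concurrency's prlimit helper; the --as=1073741824 --cpu=30 arguments cap the child's address space and CPU time so a malformed image cannot wedge the compute service. A sketch of the same guarded call, assuming the logged disk path exists on the host:

    import json
    from oslo_concurrency import processutils

    # limits mirror --as=1073741824 --cpu=30 from the logged command line
    QEMU_IMG_LIMITS = processutils.ProcessLimits(
        cpu_time=30, address_space=1024 ** 3)

    out, _err = processutils.execute(
        'env', 'LC_ALL=C', 'LANG=C', 'qemu-img', 'info',
        '/var/lib/nova/instances/7b43ca09-ed65-4465-9fcc-95caa6dc9a88/disk',
        '--force-share', '--output=json', prlimit=QEMU_IMG_LIMITS)
    print(json.loads(out)['virtual-size'])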
Dec  9 11:07:46 compute-0 nova_compute[189493]: 2025-12-09 11:07:46.976 189497 WARNING nova.virt.libvirt.driver [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  9 11:07:46 compute-0 nova_compute[189493]: 2025-12-09 11:07:46.978 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4831MB free_disk=72.13315963745117GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  9 11:07:46 compute-0 nova_compute[189493]: 2025-12-09 11:07:46.978 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  9 11:07:46 compute-0 nova_compute[189493]: 2025-12-09 11:07:46.978 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  9 11:07:47 compute-0 nova_compute[189493]: 2025-12-09 11:07:47.078 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Instance 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  9 11:07:47 compute-0 nova_compute[189493]: 2025-12-09 11:07:47.079 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Instance 7b43ca09-ed65-4465-9fcc-95caa6dc9a88 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635#033[00m
Dec  9 11:07:47 compute-0 nova_compute[189493]: 2025-12-09 11:07:47.079 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  9 11:07:47 compute-0 nova_compute[189493]: 2025-12-09 11:07:47.079 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  9 11:07:47 compute-0 nova_compute[189493]: 2025-12-09 11:07:47.140 189497 DEBUG nova.compute.provider_tree [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Inventory has not changed in ProviderTree for provider: cdc1168d-33c9-4d2c-8f23-1b695a68afd0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  9 11:07:47 compute-0 nova_compute[189493]: 2025-12-09 11:07:47.166 189497 DEBUG nova.scheduler.client.report [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Inventory has not changed for provider cdc1168d-33c9-4d2c-8f23-1b695a68afd0 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
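The inventory above is what bounds future scheduling on this node. A worked example, assuming placement's usual capacity rule of (total - reserved) * allocation_ratio:

    # values copied from the inventory reported above
    inventory = {
        'VCPU':      {'total': 8,    'reserved': 0,   'allocation_ratio': 4.0},
        'MEMORY_MB': {'total': 7679, 'reserved': 512, 'allocation_ratio': 1.0},
        'DISK_GB':   {'total': 79,   'reserved': 1,   'allocation_ratio': 0.9},
    }
    for rc, inv in inventory.items():
        print(rc, int((inv['total'] - inv['reserved']) * inv['allocation_ratio']))
    # VCPU 32, MEMORY_MB 7167, DISK_GB 70 -- so the two running guests
    # (2 VCPU, 1024 MB RAM, 4 GB disk) leave plenty of headroom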
Dec  9 11:07:47 compute-0 nova_compute[189493]: 2025-12-09 11:07:47.168 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  9 11:07:47 compute-0 nova_compute[189493]: 2025-12-09 11:07:47.168 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.189s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  9 11:07:47 compute-0 nova_compute[189493]: 2025-12-09 11:07:47.663 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:07:48 compute-0 nova_compute[189493]: 2025-12-09 11:07:48.169 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  9 11:07:48 compute-0 nova_compute[189493]: 2025-12-09 11:07:48.170 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  9 11:07:48 compute-0 nova_compute[189493]: 2025-12-09 11:07:48.170 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  9 11:07:48 compute-0 nova_compute[189493]: 2025-12-09 11:07:48.259 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:07:49 compute-0 nova_compute[189493]: 2025-12-09 11:07:49.320 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Acquiring lock "refresh_cache-41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  9 11:07:49 compute-0 nova_compute[189493]: 2025-12-09 11:07:49.321 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Acquired lock "refresh_cache-41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  9 11:07:49 compute-0 nova_compute[189493]: 2025-12-09 11:07:49.321 189497 DEBUG nova.network.neutron [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] [instance: 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004#033[00m
Dec  9 11:07:49 compute-0 nova_compute[189493]: 2025-12-09 11:07:49.322 189497 DEBUG nova.objects.instance [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  9 11:07:51 compute-0 nova_compute[189493]: 2025-12-09 11:07:51.395 189497 DEBUG nova.network.neutron [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] [instance: 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f] Updating instance_info_cache with network_info: [{"id": "2c684388-b6d9-4de0-8691-29807fabed2c", "address": "fa:16:3e:c7:65:39", "network": {"id": "c5af7354-5afe-400a-9e13-5500648117d8", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.250", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.226", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "736bbfddbeea47e3ac9d863ba120b8f2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2c684388-b6", "ovs_interfaceid": "2c684388-b6d9-4de0-8691-29807fabed2c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  9 11:07:51 compute-0 nova_compute[189493]: 2025-12-09 11:07:51.423 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Releasing lock "refresh_cache-41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  9 11:07:51 compute-0 nova_compute[189493]: 2025-12-09 11:07:51.424 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] [instance: 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929#033[00m
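The network_info blob cached above is plain JSON-shaped data, so extracting addresses is mechanical. A sketch over the single VIF logged for instance 41a113e3, trimmed to the fields actually used:

    # trimmed copy of the cached VIF from the log line above
    network_info = [{
        "address": "fa:16:3e:c7:65:39",
        "network": {"subnets": [{
            "cidr": "192.168.0.0/24",
            "ips": [{"address": "192.168.0.250",
                     "floating_ips": [{"address": "192.168.122.226"}]}],
        }]},
    }]

    for vif in network_info:
        for subnet in vif["network"]["subnets"]:
            for ip in subnet["ips"]:
                fips = [f["address"] for f in ip.get("floating_ips", [])]
                print(vif["address"], ip["address"], "->", fips or "no floating IP")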
Dec  9 11:07:52 compute-0 nova_compute[189493]: 2025-12-09 11:07:52.665 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:07:52 compute-0 nova_compute[189493]: 2025-12-09 11:07:52.841 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  9 11:07:52 compute-0 nova_compute[189493]: 2025-12-09 11:07:52.842 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  9 11:07:53 compute-0 nova_compute[189493]: 2025-12-09 11:07:53.261 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:07:57 compute-0 podman[249222]: 2025-12-09 11:07:57.005008891 +0000 UTC m=+0.137259761 container health_status 0391d8911d61abd7376f1f93f329cadfe8d3add845c9e6f46fc2c3dfbcc4f02a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd)
Dec  9 11:07:57 compute-0 nova_compute[189493]: 2025-12-09 11:07:57.668 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:07:58 compute-0 nova_compute[189493]: 2025-12-09 11:07:58.264 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:07:59 compute-0 podman[203687]: time="2025-12-09T11:07:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  9 11:07:59 compute-0 podman[203687]: @ - - [09/Dec/2025:11:07:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29523 "" "Go-http-client/1.1"
Dec  9 11:07:59 compute-0 podman[203687]: @ - - [09/Dec/2025:11:07:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4798 "" "Go-http-client/1.1"
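The two GETs above are the libpod REST API being polled over podman's unix socket (the podman_exporter container further down sets CONTAINER_HOST to exactly that socket). A stdlib-only sketch of the same container listing:

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """HTTP over a unix socket -- enough for the libpod API."""
        def __init__(self, path):
            super().__init__('localhost')
            self._path = path

        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self._path)

    conn = UnixHTTPConnection('/run/podman/podman.sock')
    conn.request('GET', '/v4.9.3/libpod/containers/json?all=true')
    for c in json.loads(conn.getresponse().read()):
        print(c['Names'], c['State'])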
Dec  9 11:08:00 compute-0 podman[249240]: 2025-12-09 11:08:00.944673507 +0000 UTC m=+0.089326481 container health_status 8508a94dacd5acdb5dbf860f4282331529be5c86ebd3e90b10e1dde8bc5013e9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  9 11:08:01 compute-0 openstack_network_exporter[205823]: ERROR   11:08:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  9 11:08:01 compute-0 openstack_network_exporter[205823]: ERROR   11:08:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  9 11:08:01 compute-0 openstack_network_exporter[205823]: ERROR   11:08:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  9 11:08:01 compute-0 openstack_network_exporter[205823]: ERROR   11:08:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  9 11:08:01 compute-0 openstack_network_exporter[205823]: ERROR   11:08:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
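The appctl.go errors above boil down to missing daemon control sockets: ovn-northd is a control-plane daemon and never runs on a compute node, so its socket cannot exist here. A hedged illustration of what such a discovery check amounts to (the .ctl path patterns are the usual defaults and may differ inside the exporter container):

    import glob

    # daemons create <name>.<pid>.ctl control sockets in their run dirs;
    # on a compute node only the ovs/ovn-controller sockets are expected
    for pattern in ('/var/run/ovn/ovn-northd.*.ctl',
                    '/var/run/ovn/ovn-controller.*.ctl',
                    '/var/run/openvswitch/ovsdb-server.*.ctl'):
        print(pattern, '->', glob.glob(pattern) or 'no control socket files found')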
Dec  9 11:08:02 compute-0 nova_compute[189493]: 2025-12-09 11:08:02.672 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:08:03 compute-0 nova_compute[189493]: 2025-12-09 11:08:03.267 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:08:04 compute-0 nova_compute[189493]: 2025-12-09 11:08:04.996 189497 DEBUG oslo_concurrency.lockutils [None req-ab479ce5-31f1-47d8-9c04-3f3562ad7411 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Acquiring lock "7b43ca09-ed65-4465-9fcc-95caa6dc9a88" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  9 11:08:04 compute-0 nova_compute[189493]: 2025-12-09 11:08:04.997 189497 DEBUG oslo_concurrency.lockutils [None req-ab479ce5-31f1-47d8-9c04-3f3562ad7411 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lock "7b43ca09-ed65-4465-9fcc-95caa6dc9a88" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  9 11:08:04 compute-0 nova_compute[189493]: 2025-12-09 11:08:04.998 189497 DEBUG oslo_concurrency.lockutils [None req-ab479ce5-31f1-47d8-9c04-3f3562ad7411 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Acquiring lock "7b43ca09-ed65-4465-9fcc-95caa6dc9a88-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  9 11:08:04 compute-0 nova_compute[189493]: 2025-12-09 11:08:04.999 189497 DEBUG oslo_concurrency.lockutils [None req-ab479ce5-31f1-47d8-9c04-3f3562ad7411 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lock "7b43ca09-ed65-4465-9fcc-95caa6dc9a88-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  9 11:08:05 compute-0 nova_compute[189493]: 2025-12-09 11:08:04.999 189497 DEBUG oslo_concurrency.lockutils [None req-ab479ce5-31f1-47d8-9c04-3f3562ad7411 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lock "7b43ca09-ed65-4465-9fcc-95caa6dc9a88-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  9 11:08:05 compute-0 nova_compute[189493]: 2025-12-09 11:08:05.001 189497 INFO nova.compute.manager [None req-ab479ce5-31f1-47d8-9c04-3f3562ad7411 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 7b43ca09-ed65-4465-9fcc-95caa6dc9a88] Terminating instance#033[00m
Dec  9 11:08:05 compute-0 nova_compute[189493]: 2025-12-09 11:08:05.003 189497 DEBUG nova.compute.manager [None req-ab479ce5-31f1-47d8-9c04-3f3562ad7411 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 7b43ca09-ed65-4465-9fcc-95caa6dc9a88] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120#033[00m
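Everything that follows (guest teardown, port unbind, allocation cleanup) hangs off this terminate request, which itself originates from an ordinary server delete against the compute API. For reference, a sketch of the client-side call with openstacksdk; the cloud profile name is an assumption, the UUID is the instance from the log:

    import openstack

    # assumption: a clouds.yaml profile named 'overcloud' exists
    conn = openstack.connect(cloud='overcloud')
    conn.compute.delete_server('7b43ca09-ed65-4465-9fcc-95caa6dc9a88')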
Dec  9 11:08:05 compute-0 kernel: tapb903bb84-e1 (unregistering): left promiscuous mode
Dec  9 11:08:05 compute-0 NetworkManager[56302]: <info>  [1765278485.0621] device (tapb903bb84-e1): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  9 11:08:05 compute-0 nova_compute[189493]: 2025-12-09 11:08:05.072 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:08:05 compute-0 ovn_controller[97780]: 2025-12-09T11:08:05Z|00058|binding|INFO|Releasing lport b903bb84-e176-4730-b223-613a9b01712b from this chassis (sb_readonly=0)
Dec  9 11:08:05 compute-0 ovn_controller[97780]: 2025-12-09T11:08:05Z|00059|binding|INFO|Setting lport b903bb84-e176-4730-b223-613a9b01712b down in Southbound
Dec  9 11:08:05 compute-0 ovn_controller[97780]: 2025-12-09T11:08:05Z|00060|binding|INFO|Removing iface tapb903bb84-e1 ovn-installed in OVS
Dec  9 11:08:05 compute-0 nova_compute[189493]: 2025-12-09 11:08:05.082 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:08:05 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:08:05.086 106644 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:91:d3:f4 192.168.0.92'], port_security=['fa:16:3e:91:d3:f4 192.168.0.92'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'vnf-scaleup_group-5eiooafn7y6w-4mhk6z2gnzo4-cnlzzwhsflo5-port-rb2sbixhbgrm', 'neutron:cidrs': '192.168.0.92/24', 'neutron:device_id': '7b43ca09-ed65-4465-9fcc-95caa6dc9a88', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c5af7354-5afe-400a-9e13-5500648117d8', 'neutron:port_capabilities': '', 'neutron:port_name': 'vnf-scaleup_group-5eiooafn7y6w-4mhk6z2gnzo4-cnlzzwhsflo5-port-rb2sbixhbgrm', 'neutron:project_id': '736bbfddbeea47e3ac9d863ba120b8f2', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd86dfae4-cfd5-480d-a50e-0084326b1439', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.176', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=61df917c-633f-4b35-857d-39fd859caf35, chassis=[], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fa01184a610>], logical_port=b903bb84-e176-4730-b223-613a9b01712b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fa01184a610>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  9 11:08:05 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:08:05.087 106644 INFO neutron.agent.ovn.metadata.agent [-] Port b903bb84-e176-4730-b223-613a9b01712b in datapath c5af7354-5afe-400a-9e13-5500648117d8 unbound from our chassis#033[00m
Dec  9 11:08:05 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:08:05.089 106644 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network c5af7354-5afe-400a-9e13-5500648117d8#033[00m
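The Matched UPDATE line shows ovsdbapp's event framework testing a Port_Binding row change against registered events; on a match the metadata agent deprovisions or reprovisions the network's namespace. A minimal sketch of such an event class, assuming ovsdbapp's RowEvent API (the real agent filters far more selectively):

    from ovsdbapp.backend.ovs_idl import event as row_event

    class PortBindingUpdatedEvent(row_event.RowEvent):
        def __init__(self):
            # same constructor arguments as in the logged repr:
            # events=('update',), table='Port_Binding', conditions=None
            super().__init__(('update',), 'Port_Binding', None)

        def run(self, event, row, old):
            # `old` carries prior values of changed columns (up, chassis above)
            print('port binding changed:', row.logical_port)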
Dec  9 11:08:05 compute-0 nova_compute[189493]: 2025-12-09 11:08:05.099 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:08:05 compute-0 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d00000004.scope: Deactivated successfully.
Dec  9 11:08:05 compute-0 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d00000004.scope: Consumed 2min 5.435s CPU time.
Dec  9 11:08:05 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:08:05.108 239934 DEBUG oslo.privsep.daemon [-] privsep: reply[7dff2a87-d27c-4aaa-b8fd-c93e902cefa7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  9 11:08:05 compute-0 systemd-machined[155790]: Machine qemu-4-instance-00000004 terminated.
Dec  9 11:08:05 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:08:05.147 239949 DEBUG oslo.privsep.daemon [-] privsep: reply[c9db50f6-bada-4d68-a138-d6ae2d713231]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  9 11:08:05 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:08:05.150 239949 DEBUG oslo.privsep.daemon [-] privsep: reply[6c09d34e-7031-4a2e-90e6-df0515ef1ea6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  9 11:08:05 compute-0 podman[249266]: 2025-12-09 11:08:05.171633195 +0000 UTC m=+0.084397603 container health_status 8ad198c17f1da12dc50d5e17562d0139fb2a2f84db056ee9551dbf4f34c4cb9d (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-container, config_id=edpm, summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=kepler, name=ubi9, build-date=2024-09-18T21:23:30, io.k8s.display-name=Red Hat Universal Base Image 9, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, managed_by=edpm_ansible, io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., io.buildah.version=1.29.0, release-0.7.12=, distribution-scope=public, io.openshift.expose-services=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release=1214.1726694543)
Dec  9 11:08:05 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:08:05.178 239949 DEBUG oslo.privsep.daemon [-] privsep: reply[f93807fd-b508-4ef8-9a2a-300fc3a59ecf]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  9 11:08:05 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:08:05.198 239934 DEBUG oslo.privsep.daemon [-] privsep: reply[2bd3d124-aaa8-4623-ab9c-e3845e2126f4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapc5af7354-51'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:bf:0d:a0'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 11, 'tx_packets': 15, 'rx_bytes': 742, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 11, 'tx_packets': 15, 'rx_bytes': 742, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 12], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 396027, 'reachable_time': 22416, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 249315, 'error': None, 'target': 'ovnmeta-c5af7354-5afe-400a-9e13-5500648117d8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  9 11:08:05 compute-0 podman[249267]: 2025-12-09 11:08:05.211451393 +0000 UTC m=+0.108273545 container health_status ceb1c84a2b093143b9383b7e11364d7e851348d724743a0cd9ce4fd0c7070c92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=edpm, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec  9 11:08:05 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:08:05.214 239934 DEBUG oslo.privsep.daemon [-] privsep: reply[d904e5e7-b1e6-4344-b229-5e8c1a116758]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapc5af7354-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 396043, 'tstamp': 396043}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 249316, 'error': None, 'target': 'ovnmeta-c5af7354-5afe-400a-9e13-5500648117d8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '192.168.0.2'], ['IFA_LOCAL', '192.168.0.2'], ['IFA_BROADCAST', '192.168.0.255'], ['IFA_LABEL', 'tapc5af7354-51'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 396046, 'tstamp': 396046}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 249316, 'error': None, 'target': 'ovnmeta-c5af7354-5afe-400a-9e13-5500648117d8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  9 11:08:05 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:08:05.215 106644 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc5af7354-50, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  9 11:08:05 compute-0 nova_compute[189493]: 2025-12-09 11:08:05.217 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:08:05 compute-0 nova_compute[189493]: 2025-12-09 11:08:05.224 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:08:05 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:08:05.225 106644 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapc5af7354-50, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  9 11:08:05 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:08:05.225 106644 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
Dec  9 11:08:05 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:08:05.226 106644 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapc5af7354-50, col_values=(('external_ids', {'iface-id': '3eb47070-bc26-4827-a5a8-68152f05129c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  9 11:08:05 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:08:05.226 106644 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129#033[00m
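The three transactions above keep the metadata tap on br-int and tag its iface-id; the latter two report "Transaction caused no change" because the port was already in the desired state, so the commit is a no-op. A sketch of the same calls through ovsdbapp's Open_vSwitch schema wrapper (connection string and API shape as commonly used; treat as an approximation, not the agent's exact code):

    from ovsdbapp.backend.ovs_idl import connection
    from ovsdbapp.schema.open_vswitch import impl_idl

    idl = connection.OvsdbIdl.from_server(
        'unix:/run/openvswitch/db.sock', 'Open_vSwitch')
    ovs = impl_idl.OvsdbIdl(connection.Connection(idl, timeout=10))

    # same three operations as logged; the idempotent flags mirror the log
    ovs.del_port('tapc5af7354-50', bridge='br-ex', if_exists=True).execute(check_error=True)
    ovs.add_port('br-int', 'tapc5af7354-50', may_exist=True).execute(check_error=True)
    ovs.db_set('Interface', 'tapc5af7354-50',
               ('external_ids', {'iface-id': '3eb47070-bc26-4827-a5a8-68152f05129c'})).execute(check_error=True)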
Dec  9 11:08:05 compute-0 nova_compute[189493]: 2025-12-09 11:08:05.232 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:08:05 compute-0 nova_compute[189493]: 2025-12-09 11:08:05.238 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:08:05 compute-0 nova_compute[189493]: 2025-12-09 11:08:05.309 189497 INFO nova.virt.libvirt.driver [-] [instance: 7b43ca09-ed65-4465-9fcc-95caa6dc9a88] Instance destroyed successfully.#033[00m
Dec  9 11:08:05 compute-0 nova_compute[189493]: 2025-12-09 11:08:05.309 189497 DEBUG nova.objects.instance [None req-ab479ce5-31f1-47d8-9c04-3f3562ad7411 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lazy-loading 'resources' on Instance uuid 7b43ca09-ed65-4465-9fcc-95caa6dc9a88 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105#033[00m
Dec  9 11:08:05 compute-0 nova_compute[189493]: 2025-12-09 11:08:05.336 189497 DEBUG nova.virt.libvirt.vif [None req-ab479ce5-31f1-47d8-9c04-3f3562ad7411 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-09T10:57:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='vn-afn7y6w-4mhk6z2gnzo4-cnlzzwhsflo5-vnf-4ifywm3gsfrq',ec2_ids=<?>,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-afn7y6w-4mhk6z2gnzo4-cnlzzwhsflo5-vnf-4ifywm3gsfrq',id=4,image_ref='53d12211-5d5c-4333-b3ee-e3dcf1663767',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-09T10:57:39Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='24f6e5b2-dd43-46f1-87a4-e2efc1300914'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='736bbfddbeea47e3ac9d863ba120b8f2',ramdisk_id='',reservation_id='r-d2fjtx7u',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='admin,reader,member',image_base_image_ref='53d12211-5d5c-4333-b3ee-e3dcf1663767',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',owner_project_name='admin',owner_user_name='admin'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-09T10:57:39Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT0xMDQwMTc0NjYzMDI1NzY4MzYyPT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTEwNDAxNzQ2NjMwMjU3NjgzNjI9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09MTA0MDE3NDY2MzAyNTc2ODM2Mj09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTEwNDAxNzQ2NjMwMjU3NjgzNjI9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT0xMDQwMTc0NjYzMDI1NzY4MzYyPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT0xMDQwMTc0NjYzMDI1NzY4MzYyPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0U
tMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob2JqZWN0KToKC
Dec  9 11:08:05 compute-0 nova_compute[189493]: Cclc1xuJywgJyAnLmpvaW4oYXJncykpICAjIG5vcWEKICAgIHRyeToKICAgICAgICBscyA9IExvZ1N0cmVhbSgpCiAgICAgICAgcCA9IHN1YnByb2Nlc3MuUG9wZW4oYXJncywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09MTA0MDE3NDY2MzAyNTc2ODM2Mj09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTEwNDAxNzQ2NjMwMjU3NjgzNjI9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT0xMDQwMTc0NjYzMDI1NzY4MzYyPT0tLQo=',user_id='e6d3a937c2a74eb0816d9f63820935e0',uuid=7b43ca09-ed65-4465-9fcc-95caa6dc9a88,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "b903bb84-e176-4730-b223-613a9b01712b", "address": "fa:16:3e:91:d3:f4", "network": {"id": "c5af7354-5afe-400a-9e13-5500648117d8", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.92", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.176", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, 
"tenant_id": "736bbfddbeea47e3ac9d863ba120b8f2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb903bb84-e1", "ovs_interfaceid": "b903bb84-e176-4730-b223-613a9b01712b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828#033[00m
Dec  9 11:08:05 compute-0 nova_compute[189493]: 2025-12-09 11:08:05.337 189497 DEBUG nova.network.os_vif_util [None req-ab479ce5-31f1-47d8-9c04-3f3562ad7411 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Converting VIF {"id": "b903bb84-e176-4730-b223-613a9b01712b", "address": "fa:16:3e:91:d3:f4", "network": {"id": "c5af7354-5afe-400a-9e13-5500648117d8", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.92", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.176", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "736bbfddbeea47e3ac9d863ba120b8f2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb903bb84-e1", "ovs_interfaceid": "b903bb84-e176-4730-b223-613a9b01712b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511#033[00m
Dec  9 11:08:05 compute-0 nova_compute[189493]: 2025-12-09 11:08:05.338 189497 DEBUG nova.network.os_vif_util [None req-ab479ce5-31f1-47d8-9c04-3f3562ad7411 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:91:d3:f4,bridge_name='br-int',has_traffic_filtering=True,id=b903bb84-e176-4730-b223-613a9b01712b,network=Network(c5af7354-5afe-400a-9e13-5500648117d8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapb903bb84-e1') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548#033[00m
Dec  9 11:08:05 compute-0 nova_compute[189493]: 2025-12-09 11:08:05.339 189497 DEBUG os_vif [None req-ab479ce5-31f1-47d8-9c04-3f3562ad7411 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:91:d3:f4,bridge_name='br-int',has_traffic_filtering=True,id=b903bb84-e176-4730-b223-613a9b01712b,network=Network(c5af7354-5afe-400a-9e13-5500648117d8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapb903bb84-e1') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109#033[00m
Dec  9 11:08:05 compute-0 nova_compute[189493]: 2025-12-09 11:08:05.343 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:08:05 compute-0 nova_compute[189493]: 2025-12-09 11:08:05.344 189497 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb903bb84-e1, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
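The DelPortCommand above (if_exists=True) is ovsdbapp's transactional form of removing the instance's tap device from the integration bridge. A rough CLI equivalent, shown as a subprocess sketch with the bridge and port names taken from the record above (assumes ovs-vsctl is installed and the local OVSDB is reachable):

    import subprocess

    # --if-exists mirrors if_exists=True: succeed even if the port is
    # already gone, which matters when unplug races with OVN cleanup.
    subprocess.run(
        ["ovs-vsctl", "--if-exists", "del-port", "br-int", "tapb903bb84-e1"],
        check=True,
    )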
Dec  9 11:08:05 compute-0 nova_compute[189493]: 2025-12-09 11:08:05.347 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:08:05 compute-0 nova_compute[189493]: 2025-12-09 11:08:05.348 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248#033[00m
Dec  9 11:08:05 compute-0 nova_compute[189493]: 2025-12-09 11:08:05.349 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:08:05 compute-0 nova_compute[189493]: 2025-12-09 11:08:05.357 189497 INFO os_vif [None req-ab479ce5-31f1-47d8-9c04-3f3562ad7411 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:91:d3:f4,bridge_name='br-int',has_traffic_filtering=True,id=b903bb84-e176-4730-b223-613a9b01712b,network=Network(c5af7354-5afe-400a-9e13-5500648117d8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapb903bb84-e1')#033[00m
Dec  9 11:08:05 compute-0 nova_compute[189493]: 2025-12-09 11:08:05.358 189497 INFO nova.virt.libvirt.driver [None req-ab479ce5-31f1-47d8-9c04-3f3562ad7411 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 7b43ca09-ed65-4465-9fcc-95caa6dc9a88] Deleting instance files /var/lib/nova/instances/7b43ca09-ed65-4465-9fcc-95caa6dc9a88_del#033[00m
Dec  9 11:08:05 compute-0 nova_compute[189493]: 2025-12-09 11:08:05.360 189497 INFO nova.virt.libvirt.driver [None req-ab479ce5-31f1-47d8-9c04-3f3562ad7411 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 7b43ca09-ed65-4465-9fcc-95caa6dc9a88] Deletion of /var/lib/nova/instances/7b43ca09-ed65-4465-9fcc-95caa6dc9a88_del complete#033[00m
Dec  9 11:08:05 compute-0 nova_compute[189493]: 2025-12-09 11:08:05.420 189497 INFO nova.compute.manager [None req-ab479ce5-31f1-47d8-9c04-3f3562ad7411 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 7b43ca09-ed65-4465-9fcc-95caa6dc9a88] Took 0.42 seconds to destroy the instance on the hypervisor.#033[00m
Dec  9 11:08:05 compute-0 nova_compute[189493]: 2025-12-09 11:08:05.421 189497 DEBUG oslo.service.loopingcall [None req-ab479ce5-31f1-47d8-9c04-3f3562ad7411 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435#033[00m
Dec  9 11:08:05 compute-0 nova_compute[189493]: 2025-12-09 11:08:05.422 189497 DEBUG nova.compute.manager [-] [instance: 7b43ca09-ed65-4465-9fcc-95caa6dc9a88] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259#033[00m
Dec  9 11:08:05 compute-0 nova_compute[189493]: 2025-12-09 11:08:05.423 189497 DEBUG nova.network.neutron [-] [instance: 7b43ca09-ed65-4465-9fcc-95caa6dc9a88] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803#033[00m
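The "Waiting for function ... _deallocate_network_with_retries" line at 11:08:05.421 is oslo.service's looping-call machinery: the Neutron teardown is wrapped in a looping call that re-invokes it until it succeeds, while the caller blocks on the result. A conceptual sketch of that pattern using the public oslo_service API (try_teardown is a hypothetical stand-in, not nova's code):

    from oslo_service import loopingcall

    def deallocate():
        # Stand-in for the real Neutron teardown; returning lets the
        # loop fire again on the next interval, raising LoopingCallDone
        # ends it and hands the value back to the waiter.
        if not try_teardown():          # hypothetical helper
            return
        raise loopingcall.LoopingCallDone(retvalue=True)

    timer = loopingcall.FixedIntervalLoopingCall(deallocate)
    result = timer.start(interval=1).wait()  # blocks, like "Waiting for function"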
Dec  9 11:08:05 compute-0 rsyslogd[236818]: message too long (8192) with configured size 8096, begin of message is: 2025-12-09 11:08:05.336 189497 DEBUG nova.virt.libvirt.vif [None req-ab479ce5-31 [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
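The rsyslogd complaint above is why the 11:08:05.336 VIF record is split across entries: the message exceeds the configured 8096-byte cap, so rsyslog truncates it (error 2445, per the linked page). If the full Instance dump is wanted in one record, the cap can be raised; a sketch of the relevant /etc/rsyslog.conf setting, which must appear before any input module is loaded:

    # Legacy directive; the RainerScript form is global(maxMessageSize="64k").
    $MaxMessageSize 64k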
Dec  9 11:08:07 compute-0 nova_compute[189493]: 2025-12-09 11:08:07.159 189497 DEBUG nova.compute.manager [req-55d2dae8-a5ed-4886-bed3-4990fd13a247 req-c2cff718-1206-4ceb-9298-ff623fde569f 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] [instance: 7b43ca09-ed65-4465-9fcc-95caa6dc9a88] Received event network-vif-unplugged-b903bb84-e176-4730-b223-613a9b01712b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  9 11:08:07 compute-0 nova_compute[189493]: 2025-12-09 11:08:07.159 189497 DEBUG oslo_concurrency.lockutils [req-55d2dae8-a5ed-4886-bed3-4990fd13a247 req-c2cff718-1206-4ceb-9298-ff623fde569f 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] Acquiring lock "7b43ca09-ed65-4465-9fcc-95caa6dc9a88-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  9 11:08:07 compute-0 nova_compute[189493]: 2025-12-09 11:08:07.160 189497 DEBUG oslo_concurrency.lockutils [req-55d2dae8-a5ed-4886-bed3-4990fd13a247 req-c2cff718-1206-4ceb-9298-ff623fde569f 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] Lock "7b43ca09-ed65-4465-9fcc-95caa6dc9a88-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  9 11:08:07 compute-0 nova_compute[189493]: 2025-12-09 11:08:07.160 189497 DEBUG oslo_concurrency.lockutils [req-55d2dae8-a5ed-4886-bed3-4990fd13a247 req-c2cff718-1206-4ceb-9298-ff623fde569f 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] Lock "7b43ca09-ed65-4465-9fcc-95caa6dc9a88-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  9 11:08:07 compute-0 nova_compute[189493]: 2025-12-09 11:08:07.160 189497 DEBUG nova.compute.manager [req-55d2dae8-a5ed-4886-bed3-4990fd13a247 req-c2cff718-1206-4ceb-9298-ff623fde569f 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] [instance: 7b43ca09-ed65-4465-9fcc-95caa6dc9a88] No waiting events found dispatching network-vif-unplugged-b903bb84-e176-4730-b223-613a9b01712b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  9 11:08:07 compute-0 nova_compute[189493]: 2025-12-09 11:08:07.161 189497 DEBUG nova.compute.manager [req-55d2dae8-a5ed-4886-bed3-4990fd13a247 req-c2cff718-1206-4ceb-9298-ff623fde569f 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] [instance: 7b43ca09-ed65-4465-9fcc-95caa6dc9a88] Received event network-vif-unplugged-b903bb84-e176-4730-b223-613a9b01712b for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826#033[00m
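The 11:08:07.159-.161 lines show nova's external-event plumbing: the Neutron-sent network-vif-unplugged event is popped under the per-instance "-events" lock, and because nothing registered a waiter ("No waiting events found"), it is dispatched and merely logged. A conceptual sketch of that register/pop contract, with threading.Event standing in for nova's internal event objects:

    import threading

    waiters = {}  # (instance_uuid, event_name) -> threading.Event

    def prepare(uuid, name):
        # A caller that must block on the event registers a waiter first.
        ev = threading.Event()
        waiters[(uuid, name)] = ev
        return ev

    def pop_event(uuid, name):
        ev = waiters.pop((uuid, name), None)
        if ev is None:
            print("No waiting events found, dispatching", name)
            return
        ev.set()  # wakes whoever called prepare(...).wait()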
Dec  9 11:08:07 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:08:07.371 106644 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=9, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '56:ee:a7', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '3e:d4:ad:27:cb:0f'}, ipsec=False) old=SB_Global(nb_cfg=8) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  9 11:08:07 compute-0 nova_compute[189493]: 2025-12-09 11:08:07.372 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:08:07 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:08:07.374 106644 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  9 11:08:07 compute-0 nova_compute[189493]: 2025-12-09 11:08:07.485 189497 DEBUG nova.compute.manager [req-17ee5bf9-af9e-41e3-a35e-a023b367100c req-0e566dac-72cc-43be-a25f-d5a43cb101f5 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] [instance: 7b43ca09-ed65-4465-9fcc-95caa6dc9a88] Received event network-changed-b903bb84-e176-4730-b223-613a9b01712b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  9 11:08:07 compute-0 nova_compute[189493]: 2025-12-09 11:08:07.486 189497 DEBUG nova.compute.manager [req-17ee5bf9-af9e-41e3-a35e-a023b367100c req-0e566dac-72cc-43be-a25f-d5a43cb101f5 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] [instance: 7b43ca09-ed65-4465-9fcc-95caa6dc9a88] Refreshing instance network info cache due to event network-changed-b903bb84-e176-4730-b223-613a9b01712b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053#033[00m
Dec  9 11:08:07 compute-0 nova_compute[189493]: 2025-12-09 11:08:07.486 189497 DEBUG oslo_concurrency.lockutils [req-17ee5bf9-af9e-41e3-a35e-a023b367100c req-0e566dac-72cc-43be-a25f-d5a43cb101f5 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] Acquiring lock "refresh_cache-7b43ca09-ed65-4465-9fcc-95caa6dc9a88" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312#033[00m
Dec  9 11:08:07 compute-0 nova_compute[189493]: 2025-12-09 11:08:07.488 189497 DEBUG oslo_concurrency.lockutils [req-17ee5bf9-af9e-41e3-a35e-a023b367100c req-0e566dac-72cc-43be-a25f-d5a43cb101f5 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] Acquired lock "refresh_cache-7b43ca09-ed65-4465-9fcc-95caa6dc9a88" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315#033[00m
Dec  9 11:08:07 compute-0 nova_compute[189493]: 2025-12-09 11:08:07.489 189497 DEBUG nova.network.neutron [req-17ee5bf9-af9e-41e3-a35e-a023b367100c req-0e566dac-72cc-43be-a25f-d5a43cb101f5 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] [instance: 7b43ca09-ed65-4465-9fcc-95caa6dc9a88] Refreshing network info cache for port b903bb84-e176-4730-b223-613a9b01712b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007#033[00m
Dec  9 11:08:07 compute-0 nova_compute[189493]: 2025-12-09 11:08:07.677 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:08:07 compute-0 podman[249338]: 2025-12-09 11:08:07.981577228 +0000 UTC m=+0.121638464 container health_status 8f562587c42532f877bd4ac5090cf2d81dd9415b6201e22f74972e6d6b9e9403 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251202, io.buildah.version=1.41.3)
Dec  9 11:08:07 compute-0 podman[249339]: 2025-12-09 11:08:07.981638459 +0000 UTC m=+0.119410865 container health_status b432835229990b9e7cd237d75f8273b15e565fca524d4ea9a7c1f1bf3c773614 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4)
Dec  9 11:08:09 compute-0 nova_compute[189493]: 2025-12-09 11:08:09.280 189497 DEBUG nova.compute.manager [req-a384c5ae-5b38-46fa-9baa-6b28f0236e7b req-c5902d0f-fc0e-4241-8b93-b21c19f04f64 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] [instance: 7b43ca09-ed65-4465-9fcc-95caa6dc9a88] Received event network-vif-plugged-b903bb84-e176-4730-b223-613a9b01712b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048#033[00m
Dec  9 11:08:09 compute-0 nova_compute[189493]: 2025-12-09 11:08:09.281 189497 DEBUG oslo_concurrency.lockutils [req-a384c5ae-5b38-46fa-9baa-6b28f0236e7b req-c5902d0f-fc0e-4241-8b93-b21c19f04f64 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] Acquiring lock "7b43ca09-ed65-4465-9fcc-95caa6dc9a88-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  9 11:08:09 compute-0 nova_compute[189493]: 2025-12-09 11:08:09.281 189497 DEBUG oslo_concurrency.lockutils [req-a384c5ae-5b38-46fa-9baa-6b28f0236e7b req-c5902d0f-fc0e-4241-8b93-b21c19f04f64 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] Lock "7b43ca09-ed65-4465-9fcc-95caa6dc9a88-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  9 11:08:09 compute-0 nova_compute[189493]: 2025-12-09 11:08:09.282 189497 DEBUG oslo_concurrency.lockutils [req-a384c5ae-5b38-46fa-9baa-6b28f0236e7b req-c5902d0f-fc0e-4241-8b93-b21c19f04f64 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] Lock "7b43ca09-ed65-4465-9fcc-95caa6dc9a88-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  9 11:08:09 compute-0 nova_compute[189493]: 2025-12-09 11:08:09.282 189497 DEBUG nova.compute.manager [req-a384c5ae-5b38-46fa-9baa-6b28f0236e7b req-c5902d0f-fc0e-4241-8b93-b21c19f04f64 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] [instance: 7b43ca09-ed65-4465-9fcc-95caa6dc9a88] No waiting events found dispatching network-vif-plugged-b903bb84-e176-4730-b223-613a9b01712b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320#033[00m
Dec  9 11:08:09 compute-0 nova_compute[189493]: 2025-12-09 11:08:09.282 189497 WARNING nova.compute.manager [req-a384c5ae-5b38-46fa-9baa-6b28f0236e7b req-c5902d0f-fc0e-4241-8b93-b21c19f04f64 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] [instance: 7b43ca09-ed65-4465-9fcc-95caa6dc9a88] Received unexpected event network-vif-plugged-b903bb84-e176-4730-b223-613a9b01712b for instance with vm_state active and task_state deleting.#033[00m
Dec  9 11:08:10 compute-0 nova_compute[189493]: 2025-12-09 11:08:10.348 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:08:10 compute-0 nova_compute[189493]: 2025-12-09 11:08:10.565 189497 DEBUG nova.network.neutron [-] [instance: 7b43ca09-ed65-4465-9fcc-95caa6dc9a88] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  9 11:08:10 compute-0 nova_compute[189493]: 2025-12-09 11:08:10.590 189497 INFO nova.compute.manager [-] [instance: 7b43ca09-ed65-4465-9fcc-95caa6dc9a88] Took 5.17 seconds to deallocate network for instance.#033[00m
Dec  9 11:08:10 compute-0 nova_compute[189493]: 2025-12-09 11:08:10.628 189497 DEBUG oslo_concurrency.lockutils [None req-ab479ce5-31f1-47d8-9c04-3f3562ad7411 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  9 11:08:10 compute-0 nova_compute[189493]: 2025-12-09 11:08:10.629 189497 DEBUG oslo_concurrency.lockutils [None req-ab479ce5-31f1-47d8-9c04-3f3562ad7411 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  9 11:08:10 compute-0 nova_compute[189493]: 2025-12-09 11:08:10.736 189497 DEBUG nova.compute.provider_tree [None req-ab479ce5-31f1-47d8-9c04-3f3562ad7411 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Inventory has not changed in ProviderTree for provider: cdc1168d-33c9-4d2c-8f23-1b695a68afd0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  9 11:08:10 compute-0 nova_compute[189493]: 2025-12-09 11:08:10.754 189497 DEBUG nova.scheduler.client.report [None req-ab479ce5-31f1-47d8-9c04-3f3562ad7411 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Inventory has not changed for provider cdc1168d-33c9-4d2c-8f23-1b695a68afd0 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
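The inventory dict at 11:08:10.754 fixes this node's schedulable capacity; applying the standard placement formula, capacity = (total - reserved) * allocation_ratio, gives 32 VCPU, 7167 MB of RAM and about 70 GB of disk. A quick check with the values from the record above:

    inv = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 79,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, v in inv.items():
        cap = (v["total"] - v["reserved"]) * v["allocation_ratio"]
        print(rc, cap)  # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 70.2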
Dec  9 11:08:10 compute-0 nova_compute[189493]: 2025-12-09 11:08:10.781 189497 DEBUG oslo_concurrency.lockutils [None req-ab479ce5-31f1-47d8-9c04-3f3562ad7411 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.152s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  9 11:08:10 compute-0 nova_compute[189493]: 2025-12-09 11:08:10.808 189497 INFO nova.scheduler.client.report [None req-ab479ce5-31f1-47d8-9c04-3f3562ad7411 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Deleted allocations for instance 7b43ca09-ed65-4465-9fcc-95caa6dc9a88#033[00m
Dec  9 11:08:10 compute-0 nova_compute[189493]: 2025-12-09 11:08:10.883 189497 DEBUG oslo_concurrency.lockutils [None req-ab479ce5-31f1-47d8-9c04-3f3562ad7411 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lock "7b43ca09-ed65-4465-9fcc-95caa6dc9a88" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 5.886s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  9 11:08:11 compute-0 nova_compute[189493]: 2025-12-09 11:08:11.052 189497 DEBUG nova.network.neutron [req-17ee5bf9-af9e-41e3-a35e-a023b367100c req-0e566dac-72cc-43be-a25f-d5a43cb101f5 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] [instance: 7b43ca09-ed65-4465-9fcc-95caa6dc9a88] Updated VIF entry in instance network info cache for port b903bb84-e176-4730-b223-613a9b01712b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482#033[00m
Dec  9 11:08:11 compute-0 nova_compute[189493]: 2025-12-09 11:08:11.054 189497 DEBUG nova.network.neutron [req-17ee5bf9-af9e-41e3-a35e-a023b367100c req-0e566dac-72cc-43be-a25f-d5a43cb101f5 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] [instance: 7b43ca09-ed65-4465-9fcc-95caa6dc9a88] Updating instance_info_cache with network_info: [{"id": "b903bb84-e176-4730-b223-613a9b01712b", "address": "fa:16:3e:91:d3:f4", "network": {"id": "c5af7354-5afe-400a-9e13-5500648117d8", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.92", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "736bbfddbeea47e3ac9d863ba120b8f2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapb903bb84-e1", "ovs_interfaceid": "b903bb84-e176-4730-b223-613a9b01712b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  9 11:08:11 compute-0 nova_compute[189493]: 2025-12-09 11:08:11.076 189497 DEBUG oslo_concurrency.lockutils [req-17ee5bf9-af9e-41e3-a35e-a023b367100c req-0e566dac-72cc-43be-a25f-d5a43cb101f5 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] Releasing lock "refresh_cache-7b43ca09-ed65-4465-9fcc-95caa6dc9a88" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333#033[00m
Dec  9 11:08:12 compute-0 nova_compute[189493]: 2025-12-09 11:08:12.681 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:08:13 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:08:13.377 106644 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=9ec27861-bbe8-48fb-b30f-25b967e1609e, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '9'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
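The DbSetCommand at 11:08:13.377 is the metadata agent acknowledging the nb_cfg=9 bump seen in the SB_Global update at 11:08:07 (after the 6-second delay it announced): it writes neutron:ovn-metadata-sb-cfg into its Chassis_Private row so OVN can tell the agent is alive and in sync. The same row can be checked from the host; a sketch, assuming ovn-sbctl is available and the southbound DB is reachable:

    import subprocess

    # Dump external_ids for every Chassis_Private row; this chassis's row
    # should show neutron:ovn-metadata-sb-cfg tracking the current nb_cfg.
    subprocess.run(
        ["ovn-sbctl", "--columns=external_ids", "list", "Chassis_Private"],
        check=True,
    )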
Dec  9 11:08:15 compute-0 nova_compute[189493]: 2025-12-09 11:08:15.352 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:08:15 compute-0 podman[249373]: 2025-12-09 11:08:15.967716935 +0000 UTC m=+0.129519480 container health_status 5da5cd4e36e0bba48fb617392bc8983ed1dbced7e4599ef74bb3327a2d50468d (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, config_id=edpm, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, name=ubi9-minimal, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, version=9.6, architecture=x86_64, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.)
Dec  9 11:08:16 compute-0 podman[249393]: 2025-12-09 11:08:16.132115024 +0000 UTC m=+0.103679036 container health_status d3a438131bb4ae6fd62d2e1493edbbbd51d1b8d6cbe1e9243f414a3aa421452b (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec  9 11:08:16 compute-0 podman[249394]: 2025-12-09 11:08:16.213958829 +0000 UTC m=+0.178438766 container health_status e0a077177b2f078df1f170a6e5c0e8e08d4365b999ec0c487047ed6ab628f3d6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  9 11:08:17 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:08:17.005 106644 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  9 11:08:17 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:08:17.006 106644 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  9 11:08:17 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:08:17.006 106644 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  9 11:08:17 compute-0 nova_compute[189493]: 2025-12-09 11:08:17.685 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:08:20 compute-0 nova_compute[189493]: 2025-12-09 11:08:20.304 189497 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765278485.302588, 7b43ca09-ed65-4465-9fcc-95caa6dc9a88 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653#033[00m
Dec  9 11:08:20 compute-0 nova_compute[189493]: 2025-12-09 11:08:20.305 189497 INFO nova.compute.manager [-] [instance: 7b43ca09-ed65-4465-9fcc-95caa6dc9a88] VM Stopped (Lifecycle Event)#033[00m
Dec  9 11:08:20 compute-0 nova_compute[189493]: 2025-12-09 11:08:20.347 189497 DEBUG nova.compute.manager [None req-98b04bb9-8a62-4342-9f22-5deab8d0b28a - - - - - -] [instance: 7b43ca09-ed65-4465-9fcc-95caa6dc9a88] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762#033[00m
Dec  9 11:08:20 compute-0 nova_compute[189493]: 2025-12-09 11:08:20.355 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:08:22 compute-0 nova_compute[189493]: 2025-12-09 11:08:22.688 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.298 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.299 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
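The two ceilometer lines above record that the [pollsters] source has more pollsters than worker threads, so the tasks queue onto a single-thread executor and the polling cycle simply takes longer (no work is dropped). A minimal model of that behaviour with the same stdlib executor the agent logs at 0x7f8a78c21610:

    from concurrent.futures import ThreadPoolExecutor

    def poll(name):
        # Stand-in for one pollster run (e.g. network.incoming.bytes).
        return name

    pollsters = ["network.incoming.bytes", "network.outgoing.bytes", "cpu"]
    # max_workers=1 matches "Processing pollsters ... with [1] threads":
    # three tasks serialize onto one thread instead of running in parallel.
    with ThreadPoolExecutor(max_workers=1) as ex:
        for r in ex.map(poll, pollsters):
            print("polled", r)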
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.300 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b800>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a78c21610>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.301 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f8a75e1b7d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.302 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e19820>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a78c21610>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.302 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75eb8080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a78c21610>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.303 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75eb8110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a78c21610>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.303 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b1a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a78c21610>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.303 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75eb81a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a78c21610>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.303 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b2c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a78c21610>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.304 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a78c21610>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.304 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a78c21610>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.304 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a78fa8380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a78c21610>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.305 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a7702ebd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a78c21610>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.305 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b3e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a78c21610>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.305 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a78c21610>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.306 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75eb8440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a78c21610>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.306 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a78c21460>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a78c21610>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.306 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b4a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a78c21610>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.307 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1bce0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a78c21610>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.308 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a78c21610>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.309 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1bd10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a78c21610>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.309 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a78c21610>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.309 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1bd70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a78c21610>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.309 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1bdd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a78c21610>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.310 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1be30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a78c21610>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.311 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1bf20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a78c21610>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.311 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b7a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a78c21610>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.312 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1bfb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a78c21610>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
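The registration lines above bind each stevedore Extension loaded from the [pollsters] source to one shared concurrent.futures ThreadPoolExecutor, together with an initially empty sample cache, pollster history, and discovery cache. A minimal sketch of that shape follows; PollsterRunner and its attributes are illustrative stand-ins, not ceilometer's actual classes.

    from collections import namedtuple
    from concurrent.futures import ThreadPoolExecutor

    # Stand-in for stevedore.extension.Extension: a named plugin wrapper
    # whose .obj is the loaded pollster (here just a callable).
    Extension = namedtuple('Extension', ['name', 'obj'])

    class PollsterRunner:
        def __init__(self, max_workers=4):
            self.executor = ThreadPoolExecutor(max_workers=max_workers)
            self.registered = []
            self.cache = {}            # "cache [{}]" in the log
            self.history = {}          # "pollster history [{}]"
            self.discovery_cache = {}  # "discovery cache [{}]"

        def register_pollster_execution(self, ext):
            # One "Registering pollster [...]" DEBUG line per call.
            self.registered.append(ext)

        def run_all(self):
            # Every registered pollster runs on the same shared executor.
            return [self.executor.submit(ext.obj, self.cache,
                                         self.discovery_cache)
                    for ext in self.registered]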
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.314 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f', 'name': 'test_0', 'flavor': {'id': 'cf91b364-8467-4d1e-8c92-f7d1fab99905', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '53d12211-5d5c-4333-b3ee-e3dcf1663767'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '736bbfddbeea47e3ac9d863ba120b8f2', 'user_id': 'e6d3a937c2a74eb0816d9f63820935e0', 'hostId': '17e7a15a42f56673ff2b1bfd38625d4824c4455b94d5713ec4c3a7ee', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
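The discovery record above is a plain dict assembled from the libvirt domain plus Nova server metadata; its flavor block (m1.small: 1 vCPU, 512 MB RAM, 1 GB root disk, 1 GB ephemeral) drives several of the size meters polled below. A hypothetical consumer could lift the fields it needs into a typed structure; the dataclass below is illustrative, with key names taken directly from the log line.

    from dataclasses import dataclass

    @dataclass
    class DiscoveredInstance:
        id: str
        name: str
        vcpus: int
        ram_mb: int
        root_disk_gb: int
        ephemeral_gb: int
        vm_state: str

    def from_discovery(record: dict) -> DiscoveredInstance:
        flavor = record['flavor']
        return DiscoveredInstance(
            id=record['id'],
            name=record['name'],
            vcpus=flavor['vcpus'],
            ram_mb=flavor['ram'],
            root_disk_gb=flavor['disk'],
            ephemeral_gb=flavor['ephemeral'],
            vm_state=record['OS-EXT-STS:vm_state'],
        )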
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.314 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.314 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1b800>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.315 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1b800>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.315 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.316 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2025-12-09T11:08:23.315145) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.321 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/network.incoming.bytes volume: 2430 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.321 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
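This cycle (discovery, coordination check, heartbeat, sample, finish) repeats for every meter below. The coordination check is a guard: only when a polling source names a coordination group does the agent consult a hash ring to decide which resources this node owns; with the group name None, as logged here, everything is polled locally. A sketch of that guard, assuming a tooz-style hash ring exposing get_nodes(); the function name is illustrative.

    def resources_for_this_agent(resources, group_name, hashring, agent_id):
        if group_name is None:
            # Branch taken in the log: "not configured in a source for
            # polling that requires coordination ... hashrings [None]".
            return list(resources)
        # Otherwise keep only resources the ring maps to this agent.
        return [r for r in resources
                if agent_id in hashring.get_nodes(r.encode(), replicas=1)]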
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.321 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f8a7854a570>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.322 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.322 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e19820>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.322 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e19820>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.322 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.322 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2025-12-09T11:08:23.322494) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.348 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.349 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.349 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.349 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
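disk.device.capacity is sampled once per attached device, hence the three volume lines for one instance. The two 1073741824-byte devices are exactly 1 GiB each, matching the flavor's disk=1 and ephemeral=1 (GB); the 485376-byte third device is plausibly a config drive, though the log does not name the devices. Checking the arithmetic:

    # 1073741824 bytes == 1 GiB, matching flavor disk=1 and ephemeral=1.
    assert 1073741824 == 1024**3
    print(1073741824 / 1024**3)   # 1.0 GiB
    print(485376 / 1024)          # 474.0 KiB -- the small third device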
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.349 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f8a75eb8050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.349 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.349 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75eb8080>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.350 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75eb8080>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.350 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.350 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/network.outgoing.packets volume: 25 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.350 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.350 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f8a75eb80e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.350 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.350 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75eb8110>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.351 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75eb8110>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.351 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.351 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.351 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.351 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f8a75e1b260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.351 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.351 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1b1a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.351 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1b1a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.352 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2025-12-09T11:08:23.350095) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.352 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2025-12-09T11:08:23.351127) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.352 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.353 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2025-12-09T11:08:23.352639) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.421 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.422 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.422 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.423 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
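Note the interleaving in this cycle: worker 14 emits the samples while worker 12 trails with "Updated heartbeat" lines for meters polled a few entries earlier (network.outgoing.packets, network.outgoing.packets.drop). That ordering is consistent with a producer/consumer shape in which polling workers enqueue a heartbeat and a dedicated worker timestamps it. A minimal, purely illustrative sketch (ceilometer's actual mechanism may differ):

    import queue
    import threading
    from datetime import datetime, timezone

    beats = queue.Queue()
    status = {}

    def heartbeat(name):
        # Called on the polling side ("Pollster heartbeat update: ...").
        beats.put(name)

    def _update_status():
        # Dedicated consumer ("Updated heartbeat for ... (<timestamp>)").
        while True:
            name = beats.get()
            status[name] = datetime.now(timezone.utc).isoformat()

    threading.Thread(target=_update_status, daemon=True).start()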
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.423 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f8a75eb8170>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.423 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.424 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75eb81a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.424 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75eb81a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.424 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.425 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.425 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2025-12-09T11:08:23.424453) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.425 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.425 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f8a75e1b290>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.426 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.426 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1b2c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.426 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1b2c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.426 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.427 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.read.latency volume: 469600468 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.427 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2025-12-09T11:08:23.426798) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.427 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.read.latency volume: 78501609 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.428 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.read.latency volume: 60811824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.428 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.428 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f8a75e1b2f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.428 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.429 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1b320>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.429 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1b320>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.429 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.430 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.430 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2025-12-09T11:08:23.429588) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.430 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.430 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.431 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.431 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f8a75e1b350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.431 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.432 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1b380>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.432 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1b380>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.432 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.433 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.432 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2025-12-09T11:08:23.432441) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.433 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.433 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.434 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.434 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f8a7710f530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.434 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.434 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a78fa8380>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.435 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a78fa8380>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.435 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.435 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2025-12-09T11:08:23.435321) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.461 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/cpu volume: 50530000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.462 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
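The cpu meter is cumulative guest CPU time in nanoseconds, so the sample above says the instance has consumed roughly 50.5 s of CPU time so far:

    cpu_ns = 50_530_000_000      # volume from the cpu sample above
    print(cpu_ns / 1e9)          # 50.53 seconds of cumulative CPU time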
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.462 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f8a78ed1430>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.462 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.463 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a7702ebd0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.463 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a7702ebd0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.463 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.464 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.allocation volume: 21307392 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.464 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2025-12-09T11:08:23.463554) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.464 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.464 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.465 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.465 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f8a75e1b3b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.465 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.466 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1b3e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.466 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1b3e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.466 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.467 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.467 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2025-12-09T11:08:23.466665) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.467 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.468 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.468 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.468 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f8a75e1b410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.468 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.469 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1b440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.469 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1b440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.469 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.470 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.write.latency volume: 1299788707 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.470 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2025-12-09T11:08:23.469572) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.470 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.write.latency volume: 9241063 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.470 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.471 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.471 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f8a75eb8410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.471 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.471 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75eb8440>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.472 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75eb8440>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.472 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.472 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.472 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2025-12-09T11:08:23.472370) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.473 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
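power.state reports the libvirt domain state as a number; 1 is VIR_DOMAIN_RUNNING, consistent with 'vm_state': 'running' in the discovery record. The virDomainState values, for reference:

    # libvirt virDomainState enum values behind the power.state meter.
    LIBVIRT_STATE = {
        0: 'nostate', 1: 'running', 2: 'blocked', 3: 'paused',
        4: 'shutdown', 5: 'shutoff', 6: 'crashed', 7: 'pmsuspended',
    }
    print(LIBVIRT_STATE[1])   # running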
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.473 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f8a75e1be90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.473 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.474 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a78c21460>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.474 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a78c21460>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.474 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.475 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/network.outgoing.bytes volume: 2454 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.475 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2025-12-09T11:08:23.474617) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.475 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.475 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f8a75e1b470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.476 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.476 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1b4a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.476 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1b4a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.477 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.477 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2025-12-09T11:08:23.476955) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.477 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.write.requests volume: 234 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.478 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.478 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.478 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.479 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f8a75e1b830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.479 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.479 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1bce0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.479 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1bce0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.480 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.480 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.480 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2025-12-09T11:08:23.480011) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.481 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
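A .delta meter is derived from two successive cumulative readings of the same counter, so the 84 above is how much network.incoming.bytes grew since the previous poll (this cycle's cumulative value was 2430; the prior reading is not in this excerpt). A minimal sketch of the caching pattern, with illustrative names:

    _last = {}

    def delta(resource_id, meter, current):
        # Return growth since the previous reading, or None on first sight.
        key = (resource_id, meter)
        prev = _last.get(key)
        _last[key] = current
        return None if prev is None else current - prev

    # e.g. delta('41a113e3-...', 'network.incoming.bytes', 2430)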
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.481 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f8a75e1b4d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.481 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.481 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1b500>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.482 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1b500>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.482 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.482 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2025-12-09T11:08:23.482306) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.483 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.483 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f8a75e1bad0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.483 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
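The skip above is a cheap guard: discovery returned no resources this pollster had not already handled this cycle, so there is nothing to sample. As a guard it reduces to an early return, sketched below with illustrative names:

    def maybe_poll(pollster_name, pollster, resources):
        if not resources:
            # "Skip pollster <name>, no new resources found this cycle"
            return []
        return pollster.get_samples(resources)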
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.483 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f8a75e1b530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.484 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.484 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1b560>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.484 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1b560>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.484 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.485 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2025-12-09T11:08:23.484715) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.485 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
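disk.root.size and disk.ephemeral.size are derived from the flavor in the discovery record rather than from libvirt I/O stats, which is why no per-device "volume:" DEBUG lines accompany them in this excerpt. From the record above, both samples would be 1 GB:

    # Values taken from the flavor block in the discovery line above.
    flavor = {'disk': 1, 'ephemeral': 1}   # GB
    print(flavor['disk'], flavor['ephemeral'])   # 1 1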
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.485 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f8a75e1bd40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.486 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.486 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1bd70>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.486 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1bd70>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.487 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.487 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2025-12-09T11:08:23.487079) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.487 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/network.incoming.packets volume: 28 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.488 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.488 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f8a75e1bda0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.488 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.489 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1bdd0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.489 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1bdd0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.489 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.490 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.490 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2025-12-09T11:08:23.489541) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.490 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.490 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f8a75e1be00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.491 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.491 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1be30>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.491 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1be30>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.491 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.492 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.492 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2025-12-09T11:08:23.491731) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.492 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.493 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f8a75e1bef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.493 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.493 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1bf20>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.493 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1bf20>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.494 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.494 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.494 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2025-12-09T11:08:23.493938) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.495 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.495 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f8a75e1b770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.495 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.495 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f8a75e1b7a0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.496 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f8a75e1b7a0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.496 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.496 14 DEBUG ceilometer.compute.pollsters [-] 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f/memory.usage volume: 48.796875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.496 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2025-12-09T11:08:23.496194) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.497 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.497 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f8a75e1bf80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.497 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.498 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.498 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.498 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.498 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.499 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.499 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.499 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.499 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.499 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.499 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.499 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.499 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.500 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.500 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.500 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.500 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.500 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.500 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.500 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.501 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.501 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.501 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.501 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.501 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.501 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 11:08:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:08:23.501 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 11:08:24 compute-0 nova_compute[189493]: 2025-12-09 11:08:24.598 189497 DEBUG oslo_concurrency.lockutils [None req-f87787f4-2eb2-4e6f-bd0d-a388c51b4da2 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Acquiring lock "41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  9 11:08:24 compute-0 nova_compute[189493]: 2025-12-09 11:08:24.600 189497 DEBUG oslo_concurrency.lockutils [None req-f87787f4-2eb2-4e6f-bd0d-a388c51b4da2 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lock "41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  9 11:08:24 compute-0 nova_compute[189493]: 2025-12-09 11:08:24.601 189497 DEBUG oslo_concurrency.lockutils [None req-f87787f4-2eb2-4e6f-bd0d-a388c51b4da2 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Acquiring lock "41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  9 11:08:24 compute-0 nova_compute[189493]: 2025-12-09 11:08:24.601 189497 DEBUG oslo_concurrency.lockutils [None req-f87787f4-2eb2-4e6f-bd0d-a388c51b4da2 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lock "41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  9 11:08:24 compute-0 nova_compute[189493]: 2025-12-09 11:08:24.602 189497 DEBUG oslo_concurrency.lockutils [None req-f87787f4-2eb2-4e6f-bd0d-a388c51b4da2 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lock "41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  9 11:08:24 compute-0 nova_compute[189493]: 2025-12-09 11:08:24.604 189497 INFO nova.compute.manager [None req-f87787f4-2eb2-4e6f-bd0d-a388c51b4da2 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f] Terminating instance
Dec  9 11:08:24 compute-0 nova_compute[189493]: 2025-12-09 11:08:24.607 189497 DEBUG nova.compute.manager [None req-f87787f4-2eb2-4e6f-bd0d-a388c51b4da2 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Dec  9 11:08:24 compute-0 kernel: tap2c684388-b6 (unregistering): left promiscuous mode
Dec  9 11:08:24 compute-0 NetworkManager[56302]: <info>  [1765278504.6700] device (tap2c684388-b6): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Dec  9 11:08:24 compute-0 nova_compute[189493]: 2025-12-09 11:08:24.677 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 11:08:24 compute-0 ovn_controller[97780]: 2025-12-09T11:08:24Z|00061|binding|INFO|Releasing lport 2c684388-b6d9-4de0-8691-29807fabed2c from this chassis (sb_readonly=0)
Dec  9 11:08:24 compute-0 ovn_controller[97780]: 2025-12-09T11:08:24Z|00062|binding|INFO|Setting lport 2c684388-b6d9-4de0-8691-29807fabed2c down in Southbound
Dec  9 11:08:24 compute-0 ovn_controller[97780]: 2025-12-09T11:08:24Z|00063|binding|INFO|Removing iface tap2c684388-b6 ovn-installed in OVS
Dec  9 11:08:24 compute-0 nova_compute[189493]: 2025-12-09 11:08:24.682 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 11:08:24 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:08:24.686 106644 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:c7:65:39 192.168.0.250'], port_security=['fa:16:3e:c7:65:39 192.168.0.250'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '192.168.0.250/24', 'neutron:device_id': '41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-c5af7354-5afe-400a-9e13-5500648117d8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '736bbfddbeea47e3ac9d863ba120b8f2', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'd86dfae4-cfd5-480d-a50e-0084326b1439', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.226'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=61df917c-633f-4b35-857d-39fd859caf35, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7fa01184a610>], logical_port=2c684388-b6d9-4de0-8691-29807fabed2c) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7fa01184a610>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec  9 11:08:24 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:08:24.689 106644 INFO neutron.agent.ovn.metadata.agent [-] Port 2c684388-b6d9-4de0-8691-29807fabed2c in datapath c5af7354-5afe-400a-9e13-5500648117d8 unbound from our chassis
Dec  9 11:08:24 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:08:24.691 106644 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network c5af7354-5afe-400a-9e13-5500648117d8, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Dec  9 11:08:24 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:08:24.693 239934 DEBUG oslo.privsep.daemon [-] privsep: reply[4c1da532-7239-4e05-ad57-bb2ddea6fffa]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  9 11:08:24 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:08:24.695 106644 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-c5af7354-5afe-400a-9e13-5500648117d8 namespace which is not needed anymore
Dec  9 11:08:24 compute-0 nova_compute[189493]: 2025-12-09 11:08:24.712 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 11:08:24 compute-0 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000001.scope: Deactivated successfully.
Dec  9 11:08:24 compute-0 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000001.scope: Consumed 3min 27.253s CPU time.
Dec  9 11:08:24 compute-0 systemd-machined[155790]: Machine qemu-1-instance-00000001 terminated.
Dec  9 11:08:24 compute-0 kernel: tap2c684388-b6: entered promiscuous mode
Dec  9 11:08:24 compute-0 kernel: tap2c684388-b6 (unregistering): left promiscuous mode
Dec  9 11:08:24 compute-0 NetworkManager[56302]: <info>  [1765278504.8402] manager: (tap2c684388-b6): new Tun device (/org/freedesktop/NetworkManager/Devices/33)
Dec  9 11:08:24 compute-0 nova_compute[189493]: 2025-12-09 11:08:24.849 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 11:08:24 compute-0 nova_compute[189493]: 2025-12-09 11:08:24.905 189497 DEBUG nova.compute.manager [req-3ddb9c25-dfbf-4fb9-8e3b-54cae939a9ec req-0697aeb9-c748-454c-b7a2-1b63fa424068 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] [instance: 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f] Received event network-vif-unplugged-2c684388-b6d9-4de0-8691-29807fabed2c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec  9 11:08:24 compute-0 nova_compute[189493]: 2025-12-09 11:08:24.906 189497 DEBUG oslo_concurrency.lockutils [req-3ddb9c25-dfbf-4fb9-8e3b-54cae939a9ec req-0697aeb9-c748-454c-b7a2-1b63fa424068 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] Acquiring lock "41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  9 11:08:24 compute-0 nova_compute[189493]: 2025-12-09 11:08:24.906 189497 DEBUG oslo_concurrency.lockutils [req-3ddb9c25-dfbf-4fb9-8e3b-54cae939a9ec req-0697aeb9-c748-454c-b7a2-1b63fa424068 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] Lock "41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  9 11:08:24 compute-0 nova_compute[189493]: 2025-12-09 11:08:24.906 189497 DEBUG oslo_concurrency.lockutils [req-3ddb9c25-dfbf-4fb9-8e3b-54cae939a9ec req-0697aeb9-c748-454c-b7a2-1b63fa424068 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] Lock "41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  9 11:08:24 compute-0 nova_compute[189493]: 2025-12-09 11:08:24.906 189497 DEBUG nova.compute.manager [req-3ddb9c25-dfbf-4fb9-8e3b-54cae939a9ec req-0697aeb9-c748-454c-b7a2-1b63fa424068 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] [instance: 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f] No waiting events found dispatching network-vif-unplugged-2c684388-b6d9-4de0-8691-29807fabed2c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec  9 11:08:24 compute-0 nova_compute[189493]: 2025-12-09 11:08:24.907 189497 DEBUG nova.compute.manager [req-3ddb9c25-dfbf-4fb9-8e3b-54cae939a9ec req-0697aeb9-c748-454c-b7a2-1b63fa424068 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] [instance: 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f] Received event network-vif-unplugged-2c684388-b6d9-4de0-8691-29807fabed2c for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Dec  9 11:08:24 compute-0 nova_compute[189493]: 2025-12-09 11:08:24.908 189497 INFO nova.virt.libvirt.driver [-] [instance: 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f] Instance destroyed successfully.
Dec  9 11:08:24 compute-0 nova_compute[189493]: 2025-12-09 11:08:24.908 189497 DEBUG nova.objects.instance [None req-f87787f4-2eb2-4e6f-bd0d-a388c51b4da2 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lazy-loading 'resources' on Instance uuid 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Dec  9 11:08:24 compute-0 nova_compute[189493]: 2025-12-09 11:08:24.929 189497 DEBUG nova.virt.libvirt.vif [None req-f87787f4-2eb2-4e6f-bd0d-a388c51b4da2 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2025-12-09T10:48:38Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='test_0',display_name='test_0',ec2_ids=<?>,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='test-0',id=1,image_ref='53d12211-5d5c-4333-b3ee-e3dcf1663767',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2025-12-09T10:48:53Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='736bbfddbeea47e3ac9d863ba120b8f2',ramdisk_id='',reservation_id='r-o83aar8e',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,admin,member',image_base_image_ref='53d12211-5d5c-4333-b3ee-e3dcf1663767',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',owner_project_name='admin',owner_user_name='admin'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2025-12-09T10:48:53Z,user_data=None,user_id='e6d3a937c2a74eb0816d9f63820935e0',uuid=41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2c684388-b6d9-4de0-8691-29807fabed2c", "address": "fa:16:3e:c7:65:39", "network": {"id": "c5af7354-5afe-400a-9e13-5500648117d8", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.250", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.226", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "736bbfddbeea47e3ac9d863ba120b8f2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2c684388-b6", "ovs_interfaceid": "2c684388-b6d9-4de0-8691-29807fabed2c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Dec  9 11:08:24 compute-0 nova_compute[189493]: 2025-12-09 11:08:24.929 189497 DEBUG nova.network.os_vif_util [None req-f87787f4-2eb2-4e6f-bd0d-a388c51b4da2 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Converting VIF {"id": "2c684388-b6d9-4de0-8691-29807fabed2c", "address": "fa:16:3e:c7:65:39", "network": {"id": "c5af7354-5afe-400a-9e13-5500648117d8", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.250", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.226", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "736bbfddbeea47e3ac9d863ba120b8f2", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2c684388-b6", "ovs_interfaceid": "2c684388-b6d9-4de0-8691-29807fabed2c", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Dec  9 11:08:24 compute-0 nova_compute[189493]: 2025-12-09 11:08:24.930 189497 DEBUG nova.network.os_vif_util [None req-f87787f4-2eb2-4e6f-bd0d-a388c51b4da2 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:c7:65:39,bridge_name='br-int',has_traffic_filtering=True,id=2c684388-b6d9-4de0-8691-29807fabed2c,network=Network(c5af7354-5afe-400a-9e13-5500648117d8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2c684388-b6') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Dec  9 11:08:24 compute-0 nova_compute[189493]: 2025-12-09 11:08:24.930 189497 DEBUG os_vif [None req-f87787f4-2eb2-4e6f-bd0d-a388c51b4da2 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:c7:65:39,bridge_name='br-int',has_traffic_filtering=True,id=2c684388-b6d9-4de0-8691-29807fabed2c,network=Network(c5af7354-5afe-400a-9e13-5500648117d8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2c684388-b6') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Dec  9 11:08:24 compute-0 nova_compute[189493]: 2025-12-09 11:08:24.931 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 11:08:24 compute-0 nova_compute[189493]: 2025-12-09 11:08:24.932 189497 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2c684388-b6, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec  9 11:08:24 compute-0 nova_compute[189493]: 2025-12-09 11:08:24.934 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 11:08:24 compute-0 nova_compute[189493]: 2025-12-09 11:08:24.935 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Dec  9 11:08:24 compute-0 neutron-haproxy-ovnmeta-c5af7354-5afe-400a-9e13-5500648117d8[240047]: [NOTICE]   (240052) : haproxy version is 2.8.14-c23fe91
Dec  9 11:08:24 compute-0 neutron-haproxy-ovnmeta-c5af7354-5afe-400a-9e13-5500648117d8[240047]: [NOTICE]   (240052) : path to executable is /usr/sbin/haproxy
Dec  9 11:08:24 compute-0 neutron-haproxy-ovnmeta-c5af7354-5afe-400a-9e13-5500648117d8[240047]: [WARNING]  (240052) : Exiting Master process...
Dec  9 11:08:24 compute-0 nova_compute[189493]: 2025-12-09 11:08:24.939 189497 INFO os_vif [None req-f87787f4-2eb2-4e6f-bd0d-a388c51b4da2 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:c7:65:39,bridge_name='br-int',has_traffic_filtering=True,id=2c684388-b6d9-4de0-8691-29807fabed2c,network=Network(c5af7354-5afe-400a-9e13-5500648117d8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap2c684388-b6')
Dec  9 11:08:24 compute-0 nova_compute[189493]: 2025-12-09 11:08:24.939 189497 INFO nova.virt.libvirt.driver [None req-f87787f4-2eb2-4e6f-bd0d-a388c51b4da2 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f] Deleting instance files /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f_del
Dec  9 11:08:24 compute-0 nova_compute[189493]: 2025-12-09 11:08:24.940 189497 INFO nova.virt.libvirt.driver [None req-f87787f4-2eb2-4e6f-bd0d-a388c51b4da2 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f] Deletion of /var/lib/nova/instances/41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f_del complete
Dec  9 11:08:24 compute-0 neutron-haproxy-ovnmeta-c5af7354-5afe-400a-9e13-5500648117d8[240047]: [ALERT]    (240052) : Current worker (240054) exited with code 143 (Terminated)
Dec  9 11:08:24 compute-0 neutron-haproxy-ovnmeta-c5af7354-5afe-400a-9e13-5500648117d8[240047]: [WARNING]  (240052) : All workers exited. Exiting... (0)
Dec  9 11:08:24 compute-0 systemd[1]: libpod-c6a5b789a411de92d3d1addce50ffbf14ba551d1a46a6adcd83e6bfbca83d157.scope: Deactivated successfully.
Dec  9 11:08:24 compute-0 podman[249476]: 2025-12-09 11:08:24.950044327 +0000 UTC m=+0.079413782 container died c6a5b789a411de92d3d1addce50ffbf14ba551d1a46a6adcd83e6bfbca83d157 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c5af7354-5afe-400a-9e13-5500648117d8, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Dec  9 11:08:25 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c6a5b789a411de92d3d1addce50ffbf14ba551d1a46a6adcd83e6bfbca83d157-userdata-shm.mount: Deactivated successfully.
Dec  9 11:08:25 compute-0 systemd[1]: var-lib-containers-storage-overlay-2b24b7ae1cc8b90219deedb86d3b48361a8607a5826e7fa3b48e4b1d97a56504-merged.mount: Deactivated successfully.
Dec  9 11:08:25 compute-0 nova_compute[189493]: 2025-12-09 11:08:25.012 189497 INFO nova.compute.manager [None req-f87787f4-2eb2-4e6f-bd0d-a388c51b4da2 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] [instance: 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f] Took 0.40 seconds to destroy the instance on the hypervisor.
Dec  9 11:08:25 compute-0 nova_compute[189493]: 2025-12-09 11:08:25.013 189497 DEBUG oslo.service.loopingcall [None req-f87787f4-2eb2-4e6f-bd0d-a388c51b4da2 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Dec  9 11:08:25 compute-0 nova_compute[189493]: 2025-12-09 11:08:25.013 189497 DEBUG nova.compute.manager [-] [instance: 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Dec  9 11:08:25 compute-0 nova_compute[189493]: 2025-12-09 11:08:25.013 189497 DEBUG nova.network.neutron [-] [instance: 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Dec  9 11:08:25 compute-0 podman[249476]: 2025-12-09 11:08:25.02030671 +0000 UTC m=+0.149676165 container cleanup c6a5b789a411de92d3d1addce50ffbf14ba551d1a46a6adcd83e6bfbca83d157 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c5af7354-5afe-400a-9e13-5500648117d8, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Dec  9 11:08:25 compute-0 systemd[1]: libpod-conmon-c6a5b789a411de92d3d1addce50ffbf14ba551d1a46a6adcd83e6bfbca83d157.scope: Deactivated successfully.
Dec  9 11:08:25 compute-0 podman[249516]: 2025-12-09 11:08:25.11419229 +0000 UTC m=+0.061756752 container remove c6a5b789a411de92d3d1addce50ffbf14ba551d1a46a6adcd83e6bfbca83d157 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-c5af7354-5afe-400a-9e13-5500648117d8, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Dec  9 11:08:25 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:08:25.127 239934 DEBUG oslo.privsep.daemon [-] privsep: reply[ed28aa47-4a02-47df-92a6-e94a6111ce73]: (4, ('Tue Dec  9 11:08:24 AM UTC 2025 Stopping container neutron-haproxy-ovnmeta-c5af7354-5afe-400a-9e13-5500648117d8 (c6a5b789a411de92d3d1addce50ffbf14ba551d1a46a6adcd83e6bfbca83d157)\nc6a5b789a411de92d3d1addce50ffbf14ba551d1a46a6adcd83e6bfbca83d157\nTue Dec  9 11:08:25 AM UTC 2025 Deleting container neutron-haproxy-ovnmeta-c5af7354-5afe-400a-9e13-5500648117d8 (c6a5b789a411de92d3d1addce50ffbf14ba551d1a46a6adcd83e6bfbca83d157)\nc6a5b789a411de92d3d1addce50ffbf14ba551d1a46a6adcd83e6bfbca83d157\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  9 11:08:25 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:08:25.130 239934 DEBUG oslo.privsep.daemon [-] privsep: reply[c0069371-1547-4ab3-ba53-80fd97020efc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  9 11:08:25 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:08:25.131 106644 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapc5af7354-50, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Dec  9 11:08:25 compute-0 nova_compute[189493]: 2025-12-09 11:08:25.133 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 11:08:25 compute-0 kernel: tapc5af7354-50: left promiscuous mode
Dec  9 11:08:25 compute-0 nova_compute[189493]: 2025-12-09 11:08:25.137 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 11:08:25 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:08:25.140 239934 DEBUG oslo.privsep.daemon [-] privsep: reply[ba89abbe-7a4a-45bd-a0ef-7def6eda965d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  9 11:08:25 compute-0 nova_compute[189493]: 2025-12-09 11:08:25.154 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 11:08:25 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:08:25.165 239934 DEBUG oslo.privsep.daemon [-] privsep: reply[91d13395-43e4-4c0c-9a18-b696bc7c075c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  9 11:08:25 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:08:25.167 239934 DEBUG oslo.privsep.daemon [-] privsep: reply[fe621261-ee73-40f6-a2fd-ce5816e0e618]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  9 11:08:25 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:08:25.189 239934 DEBUG oslo.privsep.daemon [-] privsep: reply[ba133564-7547-4ba7-b2e4-939e38cf9e85]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 396015, 'reachable_time': 31644, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 249531, 'error': None, 'target': 'ovnmeta-c5af7354-5afe-400a-9e13-5500648117d8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Dec  9 11:08:25 compute-0 systemd[1]: run-netns-ovnmeta\x2dc5af7354\x2d5afe\x2d400a\x2d9e13\x2d5500648117d8.mount: Deactivated successfully.
Dec  9 11:08:25 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:08:25.208 106757 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-c5af7354-5afe-400a-9e13-5500648117d8 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607#033[00m
Dec  9 11:08:25 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:08:25.210 106757 DEBUG oslo.privsep.daemon [-] privsep: reply[7af3d7c3-b58f-4516-a999-e92749772f38]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501#033[00m
Dec  9 11:08:26 compute-0 nova_compute[189493]: 2025-12-09 11:08:26.058 189497 DEBUG nova.network.neutron [-] [instance: 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116#033[00m
Dec  9 11:08:26 compute-0 nova_compute[189493]: 2025-12-09 11:08:26.081 189497 INFO nova.compute.manager [-] [instance: 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f] Took 1.07 seconds to deallocate network for instance.#033[00m
Dec  9 11:08:26 compute-0 nova_compute[189493]: 2025-12-09 11:08:26.129 189497 DEBUG oslo_concurrency.lockutils [None req-f87787f4-2eb2-4e6f-bd0d-a388c51b4da2 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  9 11:08:26 compute-0 nova_compute[189493]: 2025-12-09 11:08:26.129 189497 DEBUG oslo_concurrency.lockutils [None req-f87787f4-2eb2-4e6f-bd0d-a388c51b4da2 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  9 11:08:26 compute-0 nova_compute[189493]: 2025-12-09 11:08:26.212 189497 DEBUG nova.compute.provider_tree [None req-f87787f4-2eb2-4e6f-bd0d-a388c51b4da2 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Inventory has not changed in ProviderTree for provider: cdc1168d-33c9-4d2c-8f23-1b695a68afd0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  9 11:08:26 compute-0 nova_compute[189493]: 2025-12-09 11:08:26.232 189497 DEBUG nova.scheduler.client.report [None req-f87787f4-2eb2-4e6f-bd0d-a388c51b4da2 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Inventory has not changed for provider cdc1168d-33c9-4d2c-8f23-1b695a68afd0 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  9 11:08:26 compute-0 nova_compute[189493]: 2025-12-09 11:08:26.258 189497 DEBUG oslo_concurrency.lockutils [None req-f87787f4-2eb2-4e6f-bd0d-a388c51b4da2 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.129s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  9 11:08:26 compute-0 nova_compute[189493]: 2025-12-09 11:08:26.284 189497 INFO nova.scheduler.client.report [None req-f87787f4-2eb2-4e6f-bd0d-a388c51b4da2 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Deleted allocations for instance 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f#033[00m
Dec  9 11:08:26 compute-0 nova_compute[189493]: 2025-12-09 11:08:26.352 189497 DEBUG oslo_concurrency.lockutils [None req-f87787f4-2eb2-4e6f-bd0d-a388c51b4da2 e6d3a937c2a74eb0816d9f63820935e0 736bbfddbeea47e3ac9d863ba120b8f2 - - default default] Lock "41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 1.753s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  9 11:08:26 compute-0 nova_compute[189493]: 2025-12-09 11:08:26.995 189497 DEBUG nova.compute.manager [req-9692f678-dfca-40a8-b3e5-e7dac6c8ee30 req-da9c7a50-4f8c-4abb-b77f-ff227deb711d 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] [instance: 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f] Received event network-vif-plugged-2c684388-b6d9-4de0-8691-29807fabed2c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Dec  9 11:08:26 compute-0 nova_compute[189493]: 2025-12-09 11:08:26.996 189497 DEBUG oslo_concurrency.lockutils [req-9692f678-dfca-40a8-b3e5-e7dac6c8ee30 req-da9c7a50-4f8c-4abb-b77f-ff227deb711d 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] Acquiring lock "41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  9 11:08:26 compute-0 nova_compute[189493]: 2025-12-09 11:08:26.996 189497 DEBUG oslo_concurrency.lockutils [req-9692f678-dfca-40a8-b3e5-e7dac6c8ee30 req-da9c7a50-4f8c-4abb-b77f-ff227deb711d 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] Lock "41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  9 11:08:26 compute-0 nova_compute[189493]: 2025-12-09 11:08:26.997 189497 DEBUG oslo_concurrency.lockutils [req-9692f678-dfca-40a8-b3e5-e7dac6c8ee30 req-da9c7a50-4f8c-4abb-b77f-ff227deb711d 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] Lock "41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  9 11:08:26 compute-0 nova_compute[189493]: 2025-12-09 11:08:26.997 189497 DEBUG nova.compute.manager [req-9692f678-dfca-40a8-b3e5-e7dac6c8ee30 req-da9c7a50-4f8c-4abb-b77f-ff227deb711d 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] [instance: 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f] No waiting events found dispatching network-vif-plugged-2c684388-b6d9-4de0-8691-29807fabed2c pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Dec  9 11:08:26 compute-0 nova_compute[189493]: 2025-12-09 11:08:26.998 189497 WARNING nova.compute.manager [req-9692f678-dfca-40a8-b3e5-e7dac6c8ee30 req-da9c7a50-4f8c-4abb-b77f-ff227deb711d 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] [instance: 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f] Received unexpected event network-vif-plugged-2c684388-b6d9-4de0-8691-29807fabed2c for instance with vm_state deleted and task_state None.
Dec  9 11:08:26 compute-0 nova_compute[189493]: 2025-12-09 11:08:26.998 189497 DEBUG nova.compute.manager [req-9692f678-dfca-40a8-b3e5-e7dac6c8ee30 req-da9c7a50-4f8c-4abb-b77f-ff227deb711d 61c5464f61f740f4a4c94bb34936a7b9 4f9ddc74cdc0415cbd72e04f405f79e8 - - default default] [instance: 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f] Received event network-vif-deleted-2c684388-b6d9-4de0-8691-29807fabed2c external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
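The network-vif-plugged and network-vif-deleted lines are Neutron external instance events relayed to the compute manager. pop_instance_event tries to match each incoming event against a waiter registered by an in-flight operation; because instance 41a113e3 was already deleted there is no waiter, hence the WARNING about an unexpected event. A simplified sketch of such a waiter registry (an illustration of the pattern, not nova's actual implementation):

    # Operations register the event they expect; incoming external events
    # either wake the matching waiter or are reported as unexpected.
    import threading

    _waiters = {}  # (instance_uuid, event_name) -> threading.Event

    def prepare_for_event(instance_uuid, event_name):
        ev = threading.Event()
        _waiters[(instance_uuid, event_name)] = ev
        return ev  # the operation later calls ev.wait(timeout)

    def pop_instance_event(instance_uuid, event_name):
        ev = _waiters.pop((instance_uuid, event_name), None)
        if ev is None:
            # Matches "No waiting events found dispatching ..." above.
            print(f"No waiting events found dispatching {event_name}")
            return
        ev.set()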
Dec  9 11:08:27 compute-0 nova_compute[189493]: 2025-12-09 11:08:27.692 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 11:08:27 compute-0 podman[249533]: 2025-12-09 11:08:27.980004963 +0000 UTC m=+0.123721808 container health_status 0391d8911d61abd7376f1f93f329cadfe8d3add845c9e6f46fc2c3dfbcc4f02a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.build-date=20251202)
Dec  9 11:08:29 compute-0 podman[203687]: time="2025-12-09T11:08:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  9 11:08:29 compute-0 podman[203687]: @ - - [09/Dec/2025:11:08:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28290 "" "Go-http-client/1.1"
Dec  9 11:08:29 compute-0 podman[203687]: @ - - [09/Dec/2025:11:08:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4334 "" "Go-http-client/1.1"
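The podman[203687] lines are the Podman API service answering libpod REST calls over its unix socket; the podman_exporter container configured elsewhere in this log (CONTAINER_HOST=unix:///run/podman/podman.sock) is the likely client. A hedged sketch of issuing the same containers/json query from Python's standard library, assuming the default socket path:

    # Query the libpod API over the Podman unix socket, mirroring the
    # "GET /v4.9.3/libpod/containers/json?all=true" request logged above.
    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        def __init__(self, socket_path):
            super().__init__('localhost')  # host header only; unused for AF_UNIX
            self.socket_path = socket_path

        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self.socket_path)

    conn = UnixHTTPConnection('/run/podman/podman.sock')
    conn.request('GET', '/v4.9.3/libpod/containers/json?all=true')
    containers = json.loads(conn.getresponse().read())
    print(len(containers), 'containers')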
Dec  9 11:08:29 compute-0 nova_compute[189493]: 2025-12-09 11:08:29.936 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 11:08:31 compute-0 openstack_network_exporter[205823]: ERROR   11:08:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  9 11:08:31 compute-0 openstack_network_exporter[205823]: ERROR   11:08:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  9 11:08:31 compute-0 openstack_network_exporter[205823]: ERROR   11:08:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  9 11:08:31 compute-0 openstack_network_exporter[205823]: ERROR   11:08:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  9 11:08:31 compute-0 openstack_network_exporter[205823]: ERROR   11:08:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
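These openstack_network_exporter ERRORs repeat every 30 seconds because the exporter probes ovn-northd and ovsdb-server control sockets that are not present on this host: a compute node runs ovn-controller and ovs-vswitchd, while ovn-northd lives on the control plane, so the failures are expected noise here. A small check for which control sockets actually exist (the rundir paths below are the conventional defaults and may differ per deployment):

    # List the OVS/OVN control sockets present on this host; on a compute
    # node you would expect ovs-vswitchd and ovn-controller sockets but no
    # ovn-northd socket, which matches the exporter errors above.
    import glob
    for pattern in ('/var/run/openvswitch/*.ctl', '/var/run/ovn/*.ctl'):
        for path in glob.glob(pattern):
            print(path)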
Dec  9 11:08:31 compute-0 podman[249552]: 2025-12-09 11:08:31.960037163 +0000 UTC m=+0.107593589 container health_status 8508a94dacd5acdb5dbf860f4282331529be5c86ebd3e90b10e1dde8bc5013e9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  9 11:08:32 compute-0 nova_compute[189493]: 2025-12-09 11:08:32.694 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 11:08:34 compute-0 nova_compute[189493]: 2025-12-09 11:08:34.940 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 11:08:35 compute-0 podman[249578]: 2025-12-09 11:08:35.938676408 +0000 UTC m=+0.088984283 container health_status ceb1c84a2b093143b9383b7e11364d7e851348d724743a0cd9ce4fd0c7070c92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=edpm, container_name=ceilometer_agent_ipmi, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Dec  9 11:08:35 compute-0 podman[249577]: 2025-12-09 11:08:35.950362842 +0000 UTC m=+0.101900908 container health_status 8ad198c17f1da12dc50d5e17562d0139fb2a2f84db056ee9551dbf4f34c4cb9d (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, container_name=kepler, io.openshift.tags=base rhel9, architecture=x86_64, vcs-type=git, vendor=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, build-date=2024-09-18T21:23:30, distribution-scope=public, summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4, io.buildah.version=1.29.0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.component=ubi9-container, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.expose-services=, managed_by=edpm_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, name=ubi9, release=1214.1726694543, maintainer=Red Hat, Inc., release-0.7.12=, config_id=edpm)
Dec  9 11:08:37 compute-0 nova_compute[189493]: 2025-12-09 11:08:37.696 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 11:08:38 compute-0 nova_compute[189493]: 2025-12-09 11:08:38.843 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  9 11:08:38 compute-0 nova_compute[189493]: 2025-12-09 11:08:38.843 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  9 11:08:38 compute-0 nova_compute[189493]: 2025-12-09 11:08:38.843 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  9 11:08:38 compute-0 podman[249613]: 2025-12-09 11:08:38.917891228 +0000 UTC m=+0.068558060 container health_status 8f562587c42532f877bd4ac5090cf2d81dd9415b6201e22f74972e6d6b9e9403 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec  9 11:08:38 compute-0 podman[249614]: 2025-12-09 11:08:38.942146761 +0000 UTC m=+0.091584502 container health_status b432835229990b9e7cd237d75f8273b15e565fca524d4ea9a7c1f1bf3c773614 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=edpm, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team)
Dec  9 11:08:39 compute-0 nova_compute[189493]: 2025-12-09 11:08:39.836 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  9 11:08:39 compute-0 nova_compute[189493]: 2025-12-09 11:08:39.899 189497 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1765278504.8977005, 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Dec  9 11:08:39 compute-0 nova_compute[189493]: 2025-12-09 11:08:39.899 189497 INFO nova.compute.manager [-] [instance: 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f] VM Stopped (Lifecycle Event)
Dec  9 11:08:39 compute-0 nova_compute[189493]: 2025-12-09 11:08:39.944 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 11:08:39 compute-0 nova_compute[189493]: 2025-12-09 11:08:39.964 189497 DEBUG nova.compute.manager [None req-e0a859fd-beb7-4372-a98b-92e2d985e9fc - - - - - -] [instance: 41a113e3-19cd-4b8c-9aa6-00f09cd6ce3f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Dec  9 11:08:42 compute-0 nova_compute[189493]: 2025-12-09 11:08:42.700 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 11:08:44 compute-0 nova_compute[189493]: 2025-12-09 11:08:44.841 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  9 11:08:44 compute-0 nova_compute[189493]: 2025-12-09 11:08:44.949 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 11:08:45 compute-0 nova_compute[189493]: 2025-12-09 11:08:45.841 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  9 11:08:45 compute-0 nova_compute[189493]: 2025-12-09 11:08:45.889 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  9 11:08:45 compute-0 nova_compute[189493]: 2025-12-09 11:08:45.889 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  9 11:08:45 compute-0 nova_compute[189493]: 2025-12-09 11:08:45.889 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  9 11:08:45 compute-0 nova_compute[189493]: 2025-12-09 11:08:45.890 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec  9 11:08:46 compute-0 nova_compute[189493]: 2025-12-09 11:08:46.252 189497 WARNING nova.virt.libvirt.driver [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec  9 11:08:46 compute-0 nova_compute[189493]: 2025-12-09 11:08:46.254 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5363MB free_disk=72.17672348022461GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec  9 11:08:46 compute-0 nova_compute[189493]: 2025-12-09 11:08:46.255 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  9 11:08:46 compute-0 nova_compute[189493]: 2025-12-09 11:08:46.255 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  9 11:08:46 compute-0 nova_compute[189493]: 2025-12-09 11:08:46.566 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec  9 11:08:46 compute-0 nova_compute[189493]: 2025-12-09 11:08:46.568 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec  9 11:08:46 compute-0 nova_compute[189493]: 2025-12-09 11:08:46.599 189497 DEBUG nova.compute.provider_tree [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Inventory has not changed in ProviderTree for provider: cdc1168d-33c9-4d2c-8f23-1b695a68afd0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec  9 11:08:46 compute-0 nova_compute[189493]: 2025-12-09 11:08:46.618 189497 DEBUG nova.scheduler.client.report [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Inventory has not changed for provider cdc1168d-33c9-4d2c-8f23-1b695a68afd0 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec  9 11:08:46 compute-0 nova_compute[189493]: 2025-12-09 11:08:46.646 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec  9 11:08:46 compute-0 nova_compute[189493]: 2025-12-09 11:08:46.647 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.392s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  9 11:08:46 compute-0 podman[249651]: 2025-12-09 11:08:46.970610935 +0000 UTC m=+0.107043923 container health_status d3a438131bb4ae6fd62d2e1493edbbbd51d1b8d6cbe1e9243f414a3aa421452b (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec  9 11:08:47 compute-0 podman[249650]: 2025-12-09 11:08:47.006159263 +0000 UTC m=+0.139720746 container health_status 5da5cd4e36e0bba48fb617392bc8983ed1dbced7e4599ef74bb3327a2d50468d (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, distribution-scope=public, maintainer=Red Hat, Inc., io.buildah.version=1.33.7, vcs-type=git, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, config_id=edpm, container_name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, version=9.6, name=ubi9-minimal, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container)
Dec  9 11:08:47 compute-0 podman[249652]: 2025-12-09 11:08:47.020686592 +0000 UTC m=+0.150535769 container health_status e0a077177b2f078df1f170a6e5c0e8e08d4365b999ec0c487047ed6ab628f3d6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller)
Dec  9 11:08:47 compute-0 nova_compute[189493]: 2025-12-09 11:08:47.649 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  9 11:08:47 compute-0 nova_compute[189493]: 2025-12-09 11:08:47.650 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec  9 11:08:47 compute-0 nova_compute[189493]: 2025-12-09 11:08:47.678 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec  9 11:08:47 compute-0 nova_compute[189493]: 2025-12-09 11:08:47.706 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 11:08:47 compute-0 nova_compute[189493]: 2025-12-09 11:08:47.841 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  9 11:08:49 compute-0 nova_compute[189493]: 2025-12-09 11:08:49.953 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 11:08:52 compute-0 nova_compute[189493]: 2025-12-09 11:08:52.707 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 11:08:52 compute-0 nova_compute[189493]: 2025-12-09 11:08:52.841 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  9 11:08:52 compute-0 nova_compute[189493]: 2025-12-09 11:08:52.841 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
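The _reclaim_queued_deletes task is a no-op here because CONF.reclaim_instance_interval is at its default of 0, so deletions take effect immediately rather than passing through a soft-deleted state first. If a reclaim window is wanted, the option lives in nova.conf; a hedged fragment (the value is illustrative):

    # nova.conf fragment (illustrative): keep soft-deleted instances for
    # one hour before the periodic task reclaims them.
    [DEFAULT]
    reclaim_instance_interval = 3600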
Dec  9 11:08:54 compute-0 nova_compute[189493]: 2025-12-09 11:08:54.958 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 11:08:55 compute-0 ovn_controller[97780]: 2025-12-09T11:08:55Z|00064|memory_trim|INFO|Detected inactivity (last active 30009 ms ago): trimming memory
Dec  9 11:08:57 compute-0 nova_compute[189493]: 2025-12-09 11:08:57.710 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 11:08:59 compute-0 podman[249722]: 2025-12-09 11:08:59.025270263 +0000 UTC m=+0.164194595 container health_status 0391d8911d61abd7376f1f93f329cadfe8d3add845c9e6f46fc2c3dfbcc4f02a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=multipathd, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec  9 11:08:59 compute-0 podman[203687]: time="2025-12-09T11:08:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  9 11:08:59 compute-0 podman[203687]: @ - - [09/Dec/2025:11:08:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28290 "" "Go-http-client/1.1"
Dec  9 11:08:59 compute-0 podman[203687]: @ - - [09/Dec/2025:11:08:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4338 "" "Go-http-client/1.1"
Dec  9 11:08:59 compute-0 nova_compute[189493]: 2025-12-09 11:08:59.964 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 11:09:01 compute-0 openstack_network_exporter[205823]: ERROR   11:09:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  9 11:09:01 compute-0 openstack_network_exporter[205823]: ERROR   11:09:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  9 11:09:01 compute-0 openstack_network_exporter[205823]: ERROR   11:09:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  9 11:09:01 compute-0 openstack_network_exporter[205823]: ERROR   11:09:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  9 11:09:01 compute-0 openstack_network_exporter[205823]: ERROR   11:09:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  9 11:09:02 compute-0 nova_compute[189493]: 2025-12-09 11:09:02.713 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 11:09:02 compute-0 podman[249742]: 2025-12-09 11:09:02.961191622 +0000 UTC m=+0.105630657 container health_status 8508a94dacd5acdb5dbf860f4282331529be5c86ebd3e90b10e1dde8bc5013e9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec  9 11:09:04 compute-0 nova_compute[189493]: 2025-12-09 11:09:04.974 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 11:09:06 compute-0 podman[249765]: 2025-12-09 11:09:06.984420529 +0000 UTC m=+0.123271247 container health_status 8ad198c17f1da12dc50d5e17562d0139fb2a2f84db056ee9551dbf4f34c4cb9d (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.buildah.version=1.29.0, io.openshift.tags=base rhel9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-container, release=1214.1726694543, io.k8s.display-name=Red Hat Universal Base Image 9, build-date=2024-09-18T21:23:30, io.openshift.expose-services=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, distribution-scope=public, container_name=kepler, release-0.7.12=, vendor=Red Hat, Inc., version=9.4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, name=ubi9, maintainer=Red Hat, Inc., managed_by=edpm_ansible, architecture=x86_64)
Dec  9 11:09:06 compute-0 podman[249766]: 2025-12-09 11:09:06.989563604 +0000 UTC m=+0.122423906 container health_status ceb1c84a2b093143b9383b7e11364d7e851348d724743a0cd9ce4fd0c7070c92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, config_id=edpm, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Dec  9 11:09:07 compute-0 nova_compute[189493]: 2025-12-09 11:09:07.719 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 11:09:09 compute-0 podman[249801]: 2025-12-09 11:09:09.941354229 +0000 UTC m=+0.092638288 container health_status 8f562587c42532f877bd4ac5090cf2d81dd9415b6201e22f74972e6d6b9e9403 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, config_id=ovn_metadata_agent)
Dec  9 11:09:09 compute-0 podman[249802]: 2025-12-09 11:09:09.961127525 +0000 UTC m=+0.097252569 container health_status b432835229990b9e7cd237d75f8273b15e565fca524d4ea9a7c1f1bf3c773614 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_id=edpm, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2)
Dec  9 11:09:09 compute-0 nova_compute[189493]: 2025-12-09 11:09:09.980 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 11:09:12 compute-0 nova_compute[189493]: 2025-12-09 11:09:12.721 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 11:09:14 compute-0 nova_compute[189493]: 2025-12-09 11:09:14.985 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 11:09:17 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:09:17.006 106644 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  9 11:09:17 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:09:17.007 106644 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  9 11:09:17 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:09:17.007 106644 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  9 11:09:17 compute-0 nova_compute[189493]: 2025-12-09 11:09:17.726 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 11:09:17 compute-0 podman[249838]: 2025-12-09 11:09:17.995984205 +0000 UTC m=+0.140832365 container health_status 5da5cd4e36e0bba48fb617392bc8983ed1dbced7e4599ef74bb3327a2d50468d (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, name=ubi9-minimal, distribution-scope=public, io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=edpm, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, architecture=x86_64, release=1755695350, version=9.6, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec  9 11:09:17 compute-0 podman[249839]: 2025-12-09 11:09:17.997643109 +0000 UTC m=+0.144560343 container health_status d3a438131bb4ae6fd62d2e1493edbbbd51d1b8d6cbe1e9243f414a3aa421452b (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  9 11:09:18 compute-0 podman[249840]: 2025-12-09 11:09:18.023509513 +0000 UTC m=+0.152443718 container health_status e0a077177b2f078df1f170a6e5c0e8e08d4365b999ec0c487047ed6ab628f3d6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
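The container health_status entries above are emitted by podman's periodic healthcheck timers; each config_data block names its check script under 'healthcheck'. The same check can be triggered by hand, sketched here via subprocess (container name taken from the log line above):

    import subprocess

    # Equivalent to the timer-driven check that produced "health_status=healthy".
    res = subprocess.run(["podman", "healthcheck", "run", "ovn_controller"])
    print("healthy" if res.returncode == 0 else "unhealthy")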
Dec  9 11:09:19 compute-0 nova_compute[189493]: 2025-12-09 11:09:19.990 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:09:22 compute-0 nova_compute[189493]: 2025-12-09 11:09:22.731 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:09:24 compute-0 nova_compute[189493]: 2025-12-09 11:09:24.995 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:09:27 compute-0 nova_compute[189493]: 2025-12-09 11:09:27.745 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
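The recurring "[POLLIN] on fd 26 __log_wakeup" lines are the OVS IDL event loop inside nova_compute noticing readable data on its OVSDB connection. A rough stdlib illustration of that level-triggered wait, with a pipe standing in for the socket; this sketches the mechanism, not the ovs.poller API itself:

    import os
    import select

    r, w = os.pipe()          # hypothetical stand-in for the OVSDB socket
    os.write(w, b"update")

    poller = select.poll()
    poller.register(r, select.POLLIN)
    for fd, events in poller.poll(1000):   # wait up to 1 s
        if events & select.POLLIN:
            print("POLLIN on fd", fd, os.read(fd, 64))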
Dec  9 11:09:29 compute-0 podman[203687]: time="2025-12-09T11:09:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  9 11:09:29 compute-0 podman[203687]: @ - - [09/Dec/2025:11:09:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28290 "" "Go-http-client/1.1"
Dec  9 11:09:29 compute-0 podman[203687]: @ - - [09/Dec/2025:11:09:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4332 "" "Go-http-client/1.1"
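The podman[203687] lines are podman's REST service answering libpod API calls over its unix socket (the podman_exporter config later in this log points CONTAINER_HOST at unix:///run/podman/podman.sock). The same GET can be issued from the Python stdlib; the UnixHTTPConnection helper is written for this sketch:

    import http.client
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        def __init__(self, sock_path):
            super().__init__("localhost")
            self._sock_path = sock_path

        def connect(self):
            s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            s.connect(self._sock_path)
            self.sock = s

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    resp = conn.getresponse()
    print(resp.status, len(resp.read()), "bytes")   # the log shows 200 / 28290 bytes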
Dec  9 11:09:30 compute-0 podman[249908]: 2025-12-09 11:09:30.000227277 +0000 UTC m=+0.152203641 container health_status 0391d8911d61abd7376f1f93f329cadfe8d3add845c9e6f46fc2c3dfbcc4f02a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=multipathd, container_name=multipathd, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  9 11:09:30 compute-0 nova_compute[189493]: 2025-12-09 11:09:30.002 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:09:31 compute-0 openstack_network_exporter[205823]: ERROR   11:09:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  9 11:09:31 compute-0 openstack_network_exporter[205823]: ERROR   11:09:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  9 11:09:31 compute-0 openstack_network_exporter[205823]: ERROR   11:09:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  9 11:09:31 compute-0 openstack_network_exporter[205823]: ERROR   11:09:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  9 11:09:31 compute-0 openstack_network_exporter[205823]: ERROR   11:09:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
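The exporter errors above mean openstack-network-exporter could not find the unix control sockets it queries over the appctl protocol. On a compute node ovn-northd does not run (it lives on the control plane), so that pair is expected noise; the ovsdb-server failure is worth checking against the socket directories mounted into the container. A quick diagnostic sketch; the glob patterns are the conventional default socket locations, not taken from this log:

    import glob

    patterns = [
        "/var/run/openvswitch/ovsdb-server.*.ctl",
        "/var/run/openvswitch/ovs-vswitchd.*.ctl",
        "/var/run/ovn/ovn-northd.*.ctl",
    ]
    for pat in patterns:
        hits = glob.glob(pat)
        print(pat, "->", hits or "no control socket found")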
Dec  9 11:09:32 compute-0 nova_compute[189493]: 2025-12-09 11:09:32.747 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:09:33 compute-0 podman[249927]: 2025-12-09 11:09:33.976689864 +0000 UTC m=+0.116211453 container health_status 8508a94dacd5acdb5dbf860f4282331529be5c86ebd3e90b10e1dde8bc5013e9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  9 11:09:35 compute-0 nova_compute[189493]: 2025-12-09 11:09:35.011 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:09:37 compute-0 nova_compute[189493]: 2025-12-09 11:09:37.749 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:09:37 compute-0 podman[249951]: 2025-12-09 11:09:37.974433166 +0000 UTC m=+0.114092538 container health_status 8ad198c17f1da12dc50d5e17562d0139fb2a2f84db056ee9551dbf4f34c4cb9d (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-container, io.buildah.version=1.29.0, summary=Provides the latest release of Red Hat Universal Base Image 9., release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_id=edpm, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, version=9.4, vendor=Red Hat, Inc., architecture=x86_64, managed_by=edpm_ansible, name=ubi9, io.openshift.expose-services=, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.openshift.tags=base rhel9, container_name=kepler, maintainer=Red Hat, Inc., build-date=2024-09-18T21:23:30, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec  9 11:09:38 compute-0 podman[249952]: 2025-12-09 11:09:38.015415524 +0000 UTC m=+0.147062907 container health_status ceb1c84a2b093143b9383b7e11364d7e851348d724743a0cd9ce4fd0c7070c92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, config_id=edpm, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Dec  9 11:09:38 compute-0 nova_compute[189493]: 2025-12-09 11:09:38.843 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  9 11:09:40 compute-0 nova_compute[189493]: 2025-12-09 11:09:40.016 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:09:40 compute-0 nova_compute[189493]: 2025-12-09 11:09:40.842 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  9 11:09:40 compute-0 nova_compute[189493]: 2025-12-09 11:09:40.843 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  9 11:09:40 compute-0 podman[249988]: 2025-12-09 11:09:40.942444543 +0000 UTC m=+0.092129046 container health_status 8f562587c42532f877bd4ac5090cf2d81dd9415b6201e22f74972e6d6b9e9403 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Dec  9 11:09:40 compute-0 podman[249989]: 2025-12-09 11:09:40.949237381 +0000 UTC m=+0.098757530 container health_status b432835229990b9e7cd237d75f8273b15e565fca524d4ea9a7c1f1bf3c773614 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  9 11:09:41 compute-0 nova_compute[189493]: 2025-12-09 11:09:41.840 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  9 11:09:41 compute-0 nova_compute[189493]: 2025-12-09 11:09:41.840 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  9 11:09:42 compute-0 nova_compute[189493]: 2025-12-09 11:09:42.752 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:09:44 compute-0 nova_compute[189493]: 2025-12-09 11:09:44.842 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  9 11:09:45 compute-0 nova_compute[189493]: 2025-12-09 11:09:45.500 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:09:45 compute-0 nova_compute[189493]: 2025-12-09 11:09:45.841 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  9 11:09:45 compute-0 nova_compute[189493]: 2025-12-09 11:09:45.876 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  9 11:09:45 compute-0 nova_compute[189493]: 2025-12-09 11:09:45.877 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  9 11:09:45 compute-0 nova_compute[189493]: 2025-12-09 11:09:45.877 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  9 11:09:45 compute-0 nova_compute[189493]: 2025-12-09 11:09:45.877 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  9 11:09:46 compute-0 nova_compute[189493]: 2025-12-09 11:09:46.210 189497 WARNING nova.virt.libvirt.driver [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  9 11:09:46 compute-0 nova_compute[189493]: 2025-12-09 11:09:46.211 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5364MB free_disk=72.17672348022461GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  9 11:09:46 compute-0 nova_compute[189493]: 2025-12-09 11:09:46.211 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  9 11:09:46 compute-0 nova_compute[189493]: 2025-12-09 11:09:46.212 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  9 11:09:46 compute-0 nova_compute[189493]: 2025-12-09 11:09:46.650 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  9 11:09:46 compute-0 nova_compute[189493]: 2025-12-09 11:09:46.650 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  9 11:09:46 compute-0 nova_compute[189493]: 2025-12-09 11:09:46.783 189497 DEBUG nova.scheduler.client.report [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Refreshing inventories for resource provider cdc1168d-33c9-4d2c-8f23-1b695a68afd0 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804#033[00m
Dec  9 11:09:46 compute-0 nova_compute[189493]: 2025-12-09 11:09:46.900 189497 DEBUG nova.scheduler.client.report [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Updating ProviderTree inventory for provider cdc1168d-33c9-4d2c-8f23-1b695a68afd0 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768#033[00m
Dec  9 11:09:46 compute-0 nova_compute[189493]: 2025-12-09 11:09:46.901 189497 DEBUG nova.compute.provider_tree [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Updating inventory in ProviderTree for provider cdc1168d-33c9-4d2c-8f23-1b695a68afd0 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176#033[00m
Dec  9 11:09:46 compute-0 nova_compute[189493]: 2025-12-09 11:09:46.918 189497 DEBUG nova.scheduler.client.report [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Refreshing aggregate associations for resource provider cdc1168d-33c9-4d2c-8f23-1b695a68afd0, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813#033[00m
Dec  9 11:09:46 compute-0 nova_compute[189493]: 2025-12-09 11:09:46.947 189497 DEBUG nova.scheduler.client.report [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Refreshing trait associations for resource provider cdc1168d-33c9-4d2c-8f23-1b695a68afd0, traits: COMPUTE_STORAGE_BUS_SATA,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_SSE,HW_CPU_X86_AMD_SVM,HW_CPU_X86_SSE4A,COMPUTE_STORAGE_BUS_FDC,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_SSE42,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_BMI,HW_CPU_X86_BMI2,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SHA,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_AESNI,HW_CPU_X86_CLMUL,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_ABM,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_SSSE3,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_SVM,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_DEVICE_TAGGING,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_F16C,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_AVX2,COMPUTE_NODE,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_SSE2,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_FMA3,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_ACCELERATORS,HW_CPU_X86_MMX,COMPUTE_SECURITY_TPM_2_0,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE41,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_GRAPHICS_MODEL_BOCHS _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825#033[00m
Dec  9 11:09:46 compute-0 nova_compute[189493]: 2025-12-09 11:09:46.970 189497 DEBUG nova.compute.provider_tree [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Inventory has not changed in ProviderTree for provider: cdc1168d-33c9-4d2c-8f23-1b695a68afd0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  9 11:09:46 compute-0 nova_compute[189493]: 2025-12-09 11:09:46.985 189497 DEBUG nova.scheduler.client.report [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Inventory has not changed for provider cdc1168d-33c9-4d2c-8f23-1b695a68afd0 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
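The inventory dict above is what the resource tracker reports to placement; usable capacity per resource class is (total - reserved) * allocation_ratio. Plugging in the logged values:

    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 79,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        cap = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(f"{rc}: {cap:g} schedulable")
    # VCPU: 32, MEMORY_MB: 7167, DISK_GB: 70.2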
Dec  9 11:09:46 compute-0 nova_compute[189493]: 2025-12-09 11:09:46.986 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  9 11:09:46 compute-0 nova_compute[189493]: 2025-12-09 11:09:46.986 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.775s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  9 11:09:47 compute-0 nova_compute[189493]: 2025-12-09 11:09:47.757 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:09:48 compute-0 nova_compute[189493]: 2025-12-09 11:09:48.533 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  9 11:09:48 compute-0 nova_compute[189493]: 2025-12-09 11:09:48.534 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  9 11:09:48 compute-0 nova_compute[189493]: 2025-12-09 11:09:48.849 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  9 11:09:48 compute-0 nova_compute[189493]: 2025-12-09 11:09:48.849 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  9 11:09:48 compute-0 nova_compute[189493]: 2025-12-09 11:09:48.850 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  9 11:09:48 compute-0 nova_compute[189493]: 2025-12-09 11:09:48.869 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  9 11:09:48 compute-0 podman[250026]: 2025-12-09 11:09:48.965229357 +0000 UTC m=+0.105658513 container health_status 5da5cd4e36e0bba48fb617392bc8983ed1dbced7e4599ef74bb3327a2d50468d (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, version=9.6, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, release=1755695350, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, name=ubi9-minimal, config_id=edpm, architecture=x86_64, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git)
Dec  9 11:09:48 compute-0 podman[250027]: 2025-12-09 11:09:48.988099963 +0000 UTC m=+0.120544938 container health_status d3a438131bb4ae6fd62d2e1493edbbbd51d1b8d6cbe1e9243f414a3aa421452b (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  9 11:09:49 compute-0 podman[250028]: 2025-12-09 11:09:48.999954167 +0000 UTC m=+0.127327467 container health_status e0a077177b2f078df1f170a6e5c0e8e08d4365b999ec0c487047ed6ab628f3d6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec  9 11:09:50 compute-0 nova_compute[189493]: 2025-12-09 11:09:50.503 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:09:52 compute-0 nova_compute[189493]: 2025-12-09 11:09:52.761 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:09:52 compute-0 nova_compute[189493]: 2025-12-09 11:09:52.842 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  9 11:09:52 compute-0 nova_compute[189493]: 2025-12-09 11:09:52.842 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183#033[00m
Dec  9 11:09:54 compute-0 nova_compute[189493]: 2025-12-09 11:09:54.842 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  9 11:09:54 compute-0 nova_compute[189493]: 2025-12-09 11:09:54.843 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  9 11:09:54 compute-0 nova_compute[189493]: 2025-12-09 11:09:54.843 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
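Every "Running periodic task ComputeManager._*" line comes from oslo.service's periodic-task machinery, and the _reclaim_queued_deletes skip above is that task checking its own interval option before doing any work. A minimal self-contained sketch of the registration pattern; the class, task body, and spacing are illustrative:

    from oslo_config import cfg
    from oslo_service import periodic_task

    class Manager(periodic_task.PeriodicTasks):
        def __init__(self):
            super().__init__(cfg.CONF)

        @periodic_task.periodic_task(spacing=10, run_immediately=True)
        def _poll_something(self, context):
            # oslo.service logs "Running periodic task ..." before calling this.
            print("polling")

    Manager().run_periodic_tasks(context=None)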
Dec  9 11:09:55 compute-0 nova_compute[189493]: 2025-12-09 11:09:55.506 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:09:56 compute-0 nova_compute[189493]: 2025-12-09 11:09:56.636 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  9 11:09:57 compute-0 nova_compute[189493]: 2025-12-09 11:09:57.763 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:09:59 compute-0 podman[203687]: time="2025-12-09T11:09:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  9 11:09:59 compute-0 podman[203687]: @ - - [09/Dec/2025:11:09:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28290 "" "Go-http-client/1.1"
Dec  9 11:09:59 compute-0 podman[203687]: @ - - [09/Dec/2025:11:09:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4333 "" "Go-http-client/1.1"
Dec  9 11:10:00 compute-0 nova_compute[189493]: 2025-12-09 11:10:00.510 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:10:01 compute-0 podman[250099]: 2025-12-09 11:10:01.010311952 +0000 UTC m=+0.152052684 container health_status 0391d8911d61abd7376f1f93f329cadfe8d3add845c9e6f46fc2c3dfbcc4f02a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=multipathd, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Dec  9 11:10:01 compute-0 openstack_network_exporter[205823]: ERROR   11:10:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  9 11:10:01 compute-0 openstack_network_exporter[205823]: ERROR   11:10:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  9 11:10:01 compute-0 openstack_network_exporter[205823]: ERROR   11:10:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  9 11:10:01 compute-0 openstack_network_exporter[205823]: ERROR   11:10:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  9 11:10:01 compute-0 openstack_network_exporter[205823]: ERROR   11:10:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  9 11:10:02 compute-0 nova_compute[189493]: 2025-12-09 11:10:02.765 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:10:04 compute-0 podman[250117]: 2025-12-09 11:10:04.956448587 +0000 UTC m=+0.102737965 container health_status 8508a94dacd5acdb5dbf860f4282331529be5c86ebd3e90b10e1dde8bc5013e9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec  9 11:10:05 compute-0 nova_compute[189493]: 2025-12-09 11:10:05.512 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:10:07 compute-0 nova_compute[189493]: 2025-12-09 11:10:07.767 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:10:08 compute-0 podman[250141]: 2025-12-09 11:10:08.96960544 +0000 UTC m=+0.110462041 container health_status ceb1c84a2b093143b9383b7e11364d7e851348d724743a0cd9ce4fd0c7070c92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=edpm, managed_by=edpm_ansible, container_name=ceilometer_agent_ipmi)
Dec  9 11:10:08 compute-0 podman[250140]: 2025-12-09 11:10:08.987243277 +0000 UTC m=+0.133345376 container health_status 8ad198c17f1da12dc50d5e17562d0139fb2a2f84db056ee9551dbf4f34c4cb9d (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.29.0, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, io.openshift.tags=base rhel9, container_name=kepler, distribution-scope=public, name=ubi9, vendor=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, summary=Provides the latest release of Red Hat Universal Base Image 9., release-0.7.12=, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.component=ubi9-container, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, architecture=x86_64, build-date=2024-09-18T21:23:30, version=9.4)
Dec  9 11:10:10 compute-0 nova_compute[189493]: 2025-12-09 11:10:10.515 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:10:11 compute-0 podman[250176]: 2025-12-09 11:10:11.977407872 +0000 UTC m=+0.119765657 container health_status 8f562587c42532f877bd4ac5090cf2d81dd9415b6201e22f74972e6d6b9e9403 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec  9 11:10:11 compute-0 podman[250177]: 2025-12-09 11:10:11.995617914 +0000 UTC m=+0.130015208 container health_status b432835229990b9e7cd237d75f8273b15e565fca524d4ea9a7c1f1bf3c773614 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.vendor=CentOS, config_id=edpm)
Dec  9 11:10:12 compute-0 nova_compute[189493]: 2025-12-09 11:10:12.770 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 11:10:13 compute-0 nova_compute[189493]: 2025-12-09 11:10:13.841 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  9 11:10:13 compute-0 nova_compute[189493]: 2025-12-09 11:10:13.842 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Dec  9 11:10:13 compute-0 nova_compute[189493]: 2025-12-09 11:10:13.860 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Dec  9 11:10:15 compute-0 nova_compute[189493]: 2025-12-09 11:10:15.518 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 11:10:17 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:10:17.008 106644 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  9 11:10:17 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:10:17.009 106644 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  9 11:10:17 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:10:17.010 106644 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  9 11:10:17 compute-0 nova_compute[189493]: 2025-12-09 11:10:17.774 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 11:10:19 compute-0 podman[250216]: 2025-12-09 11:10:19.573675052 +0000 UTC m=+0.104737498 container health_status d3a438131bb4ae6fd62d2e1493edbbbd51d1b8d6cbe1e9243f414a3aa421452b (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  9 11:10:19 compute-0 podman[250215]: 2025-12-09 11:10:19.57399563 +0000 UTC m=+0.113793608 container health_status 5da5cd4e36e0bba48fb617392bc8983ed1dbced7e4599ef74bb3327a2d50468d (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-type=git, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, build-date=2025-08-20T13:12:41, io.openshift.expose-services=, architecture=x86_64, config_id=edpm, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.)
Dec  9 11:10:19 compute-0 podman[250217]: 2025-12-09 11:10:19.602908068 +0000 UTC m=+0.122837829 container health_status e0a077177b2f078df1f170a6e5c0e8e08d4365b999ec0c487047ed6ab628f3d6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Dec  9 11:10:20 compute-0 nova_compute[189493]: 2025-12-09 11:10:20.521 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 11:10:22 compute-0 nova_compute[189493]: 2025-12-09 11:10:22.778 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 11:10:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:10:23.299 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is larger than the number of worker threads to execute them. Therefore, one can expect the process to take longer than expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  9 11:10:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:10:23.300 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Dec  9 11:10:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:10:23.300 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b800>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a78c21610>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 11:10:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:10:23.301 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f8a75e1b7d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 11:10:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:10:23.302 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e19820>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a78c21610>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 11:10:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:10:23.302 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75eb8080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a78c21610>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 11:10:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:10:23.302 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75eb8110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a78c21610>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 11:10:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:10:23.302 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b1a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a78c21610>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 11:10:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:10:23.302 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75eb81a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a78c21610>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 11:10:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:10:23.302 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b2c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a78c21610>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 11:10:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:10:23.303 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a78c21610>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 11:10:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:10:23.303 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a78c21610>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 11:10:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:10:23.303 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a78fa8380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a78c21610>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 11:10:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:10:23.303 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a7702ebd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a78c21610>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 11:10:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:10:23.303 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b3e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a78c21610>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 11:10:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:10:23.303 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a78c21610>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 11:10:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:10:23.303 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75eb8440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a78c21610>] with cache [{}], pollster history [{'network.incoming.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 11:10:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:10:23.304 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a78c21460>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a78c21610>] with cache [{}], pollster history [{'network.incoming.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 11:10:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:10:23.304 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b4a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a78c21610>] with cache [{}], pollster history [{'network.incoming.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 11:10:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:10:23.303 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  9 11:10:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:10:23.305 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f8a7854a570>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 11:10:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:10:23.305 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  9 11:10:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:10:23.305 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f8a75eb8050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 11:10:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:10:23.305 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  9 11:10:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:10:23.305 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f8a75eb80e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 11:10:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:10:23.306 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  9 11:10:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:10:23.306 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f8a75e1b260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 11:10:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:10:23.306 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  9 11:10:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:10:23.306 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f8a75eb8170>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 11:10:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:10:23.306 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  9 11:10:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:10:23.307 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f8a75e1b290>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 11:10:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:10:23.304 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1bce0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a78c21610>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'disk.device.capacity': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'disk.device.read.bytes': [], 'network.outgoing.packets.error': [], 'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 11:10:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:10:23.307 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a78c21610>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'disk.device.capacity': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'disk.device.read.bytes': [], 'network.outgoing.packets.error': [], 'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 11:10:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:10:23.307 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1bd10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a78c21610>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'disk.device.capacity': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'disk.device.read.bytes': [], 'network.outgoing.packets.error': [], 'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 11:10:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:10:23.307 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a78c21610>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'disk.device.capacity': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'disk.device.read.bytes': [], 'network.outgoing.packets.error': [], 'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 11:10:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:10:23.307 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  9 11:10:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:10:23.308 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f8a75e1b2f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 11:10:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:10:23.308 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  9 11:10:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:10:23.308 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f8a75e1b350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 11:10:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:10:23.308 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1bd70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a78c21610>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'disk.device.capacity': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'disk.device.read.bytes': [], 'network.outgoing.packets.error': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 11:10:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:10:23.309 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1bdd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a78c21610>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'disk.device.capacity': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'disk.device.read.bytes': [], 'network.outgoing.packets.error': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 11:10:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:10:23.309 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1be30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a78c21610>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'disk.device.capacity': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'disk.device.read.bytes': [], 'network.outgoing.packets.error': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 11:10:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:10:23.309 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1bf20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a78c21610>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'disk.device.capacity': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'disk.device.read.bytes': [], 'network.outgoing.packets.error': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 11:10:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:10:23.308 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  9 11:10:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:10:23.310 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f8a7710f530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 11:10:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:10:23.310 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  9 11:10:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:10:23.310 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f8a78ed1430>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 11:10:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:10:23.310 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  9 11:10:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:10:23.310 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f8a75e1b3b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 11:10:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:10:23.311 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  9 11:10:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:10:23.309 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b7a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a78c21610>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'disk.device.capacity': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'disk.device.read.bytes': [], 'network.outgoing.packets.error': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'cpu': [], 'disk.device.allocation': [], 'disk.device.write.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 11:10:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:10:23.311 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1bfb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a78c21610>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'disk.device.capacity': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'disk.device.read.bytes': [], 'network.outgoing.packets.error': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'cpu': [], 'disk.device.allocation': [], 'disk.device.write.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 11:10:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:10:23.311 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f8a75e1b410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 11:10:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:10:23.312 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  9 11:10:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:10:23.312 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f8a75eb8410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 11:10:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:10:23.312 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  9 11:10:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:10:23.312 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f8a75e1be90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 11:10:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:10:23.312 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  9 11:10:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:10:23.312 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f8a75e1b470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 11:10:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:10:23.312 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  9 11:10:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:10:23.312 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f8a75e1b830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 11:10:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:10:23.313 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  9 11:10:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:10:23.313 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f8a75e1b4d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 11:10:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:10:23.313 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  9 11:10:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:10:23.313 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f8a75e1bad0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 11:10:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:10:23.313 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  9 11:10:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:10:23.313 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f8a75e1b530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 11:10:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:10:23.313 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  9 11:10:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:10:23.313 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f8a75e1bd40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 11:10:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:10:23.314 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  9 11:10:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:10:23.314 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f8a75e1bda0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 11:10:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:10:23.314 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  9 11:10:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:10:23.314 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f8a75e1be00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 11:10:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:10:23.314 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  9 11:10:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:10:23.314 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f8a75e1bef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 11:10:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:10:23.314 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  9 11:10:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:10:23.314 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f8a75e1b770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 11:10:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:10:23.314 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  9 11:10:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:10:23.314 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f8a75e1bf80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 11:10:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:10:23.315 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  9 11:10:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:10:23.315 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 11:10:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:10:23.315 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 11:10:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:10:23.315 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 11:10:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:10:23.315 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 11:10:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:10:23.315 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 11:10:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:10:23.316 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 11:10:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:10:23.316 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 11:10:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:10:23.316 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 11:10:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:10:23.316 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 11:10:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:10:23.316 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 11:10:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:10:23.316 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 11:10:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:10:23.316 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 11:10:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:10:23.316 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 11:10:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:10:23.316 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 11:10:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:10:23.316 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 11:10:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:10:23.316 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 11:10:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:10:23.316 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 11:10:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:10:23.317 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 11:10:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:10:23.317 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 11:10:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:10:23.317 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 11:10:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:10:23.317 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 11:10:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:10:23.317 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 11:10:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:10:23.317 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 11:10:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:10:23.317 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 11:10:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:10:23.317 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 11:10:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:10:23.317 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 11:10:25 compute-0 nova_compute[189493]: 2025-12-09 11:10:25.525 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 11:10:27 compute-0 nova_compute[189493]: 2025-12-09 11:10:27.781 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 11:10:29 compute-0 podman[203687]: time="2025-12-09T11:10:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  9 11:10:29 compute-0 podman[203687]: @ - - [09/Dec/2025:11:10:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28290 "" "Go-http-client/1.1"
Dec  9 11:10:29 compute-0 podman[203687]: @ - - [09/Dec/2025:11:10:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4336 "" "Go-http-client/1.1"
Dec  9 11:10:30 compute-0 nova_compute[189493]: 2025-12-09 11:10:30.527 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 11:10:31 compute-0 openstack_network_exporter[205823]: ERROR   11:10:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  9 11:10:31 compute-0 openstack_network_exporter[205823]: ERROR   11:10:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  9 11:10:31 compute-0 openstack_network_exporter[205823]: ERROR   11:10:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  9 11:10:31 compute-0 openstack_network_exporter[205823]: ERROR   11:10:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  9 11:10:31 compute-0 openstack_network_exporter[205823]: ERROR   11:10:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  9 11:10:31 compute-0 podman[250287]: 2025-12-09 11:10:31.988123263 +0000 UTC m=+0.134726583 container health_status 0391d8911d61abd7376f1f93f329cadfe8d3add845c9e6f46fc2c3dfbcc4f02a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=multipathd, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, tcib_managed=true)
Dec  9 11:10:32 compute-0 nova_compute[189493]: 2025-12-09 11:10:32.787 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 11:10:35 compute-0 nova_compute[189493]: 2025-12-09 11:10:35.530 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 11:10:35 compute-0 podman[250307]: 2025-12-09 11:10:35.950248773 +0000 UTC m=+0.084172973 container health_status 8508a94dacd5acdb5dbf860f4282331529be5c86ebd3e90b10e1dde8bc5013e9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec  9 11:10:37 compute-0 nova_compute[189493]: 2025-12-09 11:10:37.790 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 11:10:38 compute-0 nova_compute[189493]: 2025-12-09 11:10:38.861 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  9 11:10:39 compute-0 podman[250331]: 2025-12-09 11:10:39.950144015 +0000 UTC m=+0.092321680 container health_status ceb1c84a2b093143b9383b7e11364d7e851348d724743a0cd9ce4fd0c7070c92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_ipmi)
Dec  9 11:10:39 compute-0 podman[250330]: 2025-12-09 11:10:39.980724756 +0000 UTC m=+0.120432855 container health_status 8ad198c17f1da12dc50d5e17562d0139fb2a2f84db056ee9551dbf4f34c4cb9d (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, vcs-type=git, summary=Provides the latest release of Red Hat Universal Base Image 9., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2024-09-18T21:23:30, name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, version=9.4, com.redhat.component=ubi9-container, io.buildah.version=1.29.0, architecture=x86_64, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., config_id=edpm, container_name=kepler, io.openshift.expose-services=, io.openshift.tags=base rhel9, maintainer=Red Hat, Inc.)
Dec  9 11:10:40 compute-0 nova_compute[189493]: 2025-12-09 11:10:40.535 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 11:10:40 compute-0 nova_compute[189493]: 2025-12-09 11:10:40.841 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  9 11:10:41 compute-0 nova_compute[189493]: 2025-12-09 11:10:41.837 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  9 11:10:42 compute-0 nova_compute[189493]: 2025-12-09 11:10:42.793 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 11:10:42 compute-0 nova_compute[189493]: 2025-12-09 11:10:42.841 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  9 11:10:42 compute-0 podman[250366]: 2025-12-09 11:10:42.960093113 +0000 UTC m=+0.104007950 container health_status 8f562587c42532f877bd4ac5090cf2d81dd9415b6201e22f74972e6d6b9e9403 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Dec  9 11:10:42 compute-0 podman[250367]: 2025-12-09 11:10:42.987732596 +0000 UTC m=+0.127579084 container health_status b432835229990b9e7cd237d75f8273b15e565fca524d4ea9a7c1f1bf3c773614 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_id=edpm, io.buildah.version=1.41.4, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true)
Dec  9 11:10:45 compute-0 nova_compute[189493]: 2025-12-09 11:10:45.539 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 11:10:45 compute-0 nova_compute[189493]: 2025-12-09 11:10:45.842 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  9 11:10:47 compute-0 nova_compute[189493]: 2025-12-09 11:10:47.795 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 11:10:47 compute-0 nova_compute[189493]: 2025-12-09 11:10:47.841 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  9 11:10:47 compute-0 nova_compute[189493]: 2025-12-09 11:10:47.842 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  9 11:10:48 compute-0 nova_compute[189493]: 2025-12-09 11:10:48.082 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  9 11:10:48 compute-0 nova_compute[189493]: 2025-12-09 11:10:48.083 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  9 11:10:48 compute-0 nova_compute[189493]: 2025-12-09 11:10:48.084 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  9 11:10:48 compute-0 nova_compute[189493]: 2025-12-09 11:10:48.085 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec  9 11:10:48 compute-0 nova_compute[189493]: 2025-12-09 11:10:48.518 189497 WARNING nova.virt.libvirt.driver [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec  9 11:10:48 compute-0 nova_compute[189493]: 2025-12-09 11:10:48.519 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5370MB free_disk=72.17672348022461GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec  9 11:10:48 compute-0 nova_compute[189493]: 2025-12-09 11:10:48.519 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  9 11:10:48 compute-0 nova_compute[189493]: 2025-12-09 11:10:48.519 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  9 11:10:49 compute-0 nova_compute[189493]: 2025-12-09 11:10:49.435 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec  9 11:10:49 compute-0 nova_compute[189493]: 2025-12-09 11:10:49.435 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec  9 11:10:49 compute-0 nova_compute[189493]: 2025-12-09 11:10:49.573 189497 DEBUG nova.compute.provider_tree [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Inventory has not changed in ProviderTree for provider: cdc1168d-33c9-4d2c-8f23-1b695a68afd0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec  9 11:10:49 compute-0 nova_compute[189493]: 2025-12-09 11:10:49.615 189497 DEBUG nova.scheduler.client.report [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Inventory has not changed for provider cdc1168d-33c9-4d2c-8f23-1b695a68afd0 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Dec  9 11:10:49 compute-0 nova_compute[189493]: 2025-12-09 11:10:49.617 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec  9 11:10:49 compute-0 nova_compute[189493]: 2025-12-09 11:10:49.617 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.098s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  9 11:10:49 compute-0 podman[250407]: 2025-12-09 11:10:49.98417487 +0000 UTC m=+0.124052251 container health_status d3a438131bb4ae6fd62d2e1493edbbbd51d1b8d6cbe1e9243f414a3aa421452b (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec  9 11:10:49 compute-0 podman[250406]: 2025-12-09 11:10:49.987730424 +0000 UTC m=+0.142065818 container health_status 5da5cd4e36e0bba48fb617392bc8983ed1dbced7e4599ef74bb3327a2d50468d (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, io.openshift.expose-services=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, io.buildah.version=1.33.7, managed_by=edpm_ansible, container_name=openstack_network_exporter, config_id=edpm, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, distribution-scope=public, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., architecture=x86_64, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-08-20T13:12:41, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, version=9.6)
Dec  9 11:10:50 compute-0 podman[250408]: 2025-12-09 11:10:50.070625033 +0000 UTC m=+0.206349684 container health_status e0a077177b2f078df1f170a6e5c0e8e08d4365b999ec0c487047ed6ab628f3d6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec  9 11:10:50 compute-0 nova_compute[189493]: 2025-12-09 11:10:50.541 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 11:10:51 compute-0 nova_compute[189493]: 2025-12-09 11:10:51.617 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  9 11:10:51 compute-0 nova_compute[189493]: 2025-12-09 11:10:51.618 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec  9 11:10:51 compute-0 nova_compute[189493]: 2025-12-09 11:10:51.618 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec  9 11:10:51 compute-0 nova_compute[189493]: 2025-12-09 11:10:51.696 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec  9 11:10:52 compute-0 nova_compute[189493]: 2025-12-09 11:10:52.799 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 11:10:55 compute-0 nova_compute[189493]: 2025-12-09 11:10:55.545 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 11:10:55 compute-0 nova_compute[189493]: 2025-12-09 11:10:55.842 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  9 11:10:55 compute-0 nova_compute[189493]: 2025-12-09 11:10:55.843 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec  9 11:10:57 compute-0 nova_compute[189493]: 2025-12-09 11:10:57.801 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 11:10:59 compute-0 podman[203687]: time="2025-12-09T11:10:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  9 11:10:59 compute-0 podman[203687]: @ - - [09/Dec/2025:11:10:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28290 "" "Go-http-client/1.1"
Dec  9 11:10:59 compute-0 podman[203687]: @ - - [09/Dec/2025:11:10:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4339 "" "Go-http-client/1.1"
Dec  9 11:11:00 compute-0 nova_compute[189493]: 2025-12-09 11:11:00.548 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 11:11:01 compute-0 openstack_network_exporter[205823]: ERROR   11:11:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  9 11:11:01 compute-0 openstack_network_exporter[205823]: ERROR   11:11:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  9 11:11:01 compute-0 openstack_network_exporter[205823]: ERROR   11:11:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  9 11:11:01 compute-0 openstack_network_exporter[205823]: ERROR   11:11:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  9 11:11:01 compute-0 openstack_network_exporter[205823]: ERROR   11:11:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  9 11:11:02 compute-0 nova_compute[189493]: 2025-12-09 11:11:02.804 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 11:11:02 compute-0 podman[250474]: 2025-12-09 11:11:02.968294888 +0000 UTC m=+0.114662252 container health_status 0391d8911d61abd7376f1f93f329cadfe8d3add845c9e6f46fc2c3dfbcc4f02a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, config_id=multipathd, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Dec  9 11:11:05 compute-0 nova_compute[189493]: 2025-12-09 11:11:05.553 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 11:11:06 compute-0 podman[250494]: 2025-12-09 11:11:06.961045519 +0000 UTC m=+0.114566260 container health_status 8508a94dacd5acdb5dbf860f4282331529be5c86ebd3e90b10e1dde8bc5013e9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec  9 11:11:07 compute-0 nova_compute[189493]: 2025-12-09 11:11:07.812 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 11:11:10 compute-0 nova_compute[189493]: 2025-12-09 11:11:10.557 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 11:11:10 compute-0 podman[250517]: 2025-12-09 11:11:10.954432087 +0000 UTC m=+0.100957229 container health_status ceb1c84a2b093143b9383b7e11364d7e851348d724743a0cd9ce4fd0c7070c92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec  9 11:11:10 compute-0 podman[250516]: 2025-12-09 11:11:10.993678748 +0000 UTC m=+0.146273480 container health_status 8ad198c17f1da12dc50d5e17562d0139fb2a2f84db056ee9551dbf4f34c4cb9d (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., vcs-type=git, container_name=kepler, distribution-scope=public, managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9, version=9.4, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., release-0.7.12=, io.openshift.tags=base rhel9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2024-09-18T21:23:30, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.component=ubi9-container, config_id=edpm, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9, io.openshift.expose-services=)
Dec  9 11:11:12 compute-0 nova_compute[189493]: 2025-12-09 11:11:12.814 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 11:11:13 compute-0 podman[250552]: 2025-12-09 11:11:13.955367908 +0000 UTC m=+0.089955338 container health_status 8f562587c42532f877bd4ac5090cf2d81dd9415b6201e22f74972e6d6b9e9403 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Dec  9 11:11:13 compute-0 podman[250553]: 2025-12-09 11:11:13.982334312 +0000 UTC m=+0.108325653 container health_status b432835229990b9e7cd237d75f8273b15e565fca524d4ea9a7c1f1bf3c773614 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, config_id=edpm, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Dec  9 11:11:15 compute-0 nova_compute[189493]: 2025-12-09 11:11:15.561 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 11:11:17 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:11:17.010 106644 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  9 11:11:17 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:11:17.011 106644 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  9 11:11:17 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:11:17.012 106644 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  9 11:11:17 compute-0 nova_compute[189493]: 2025-12-09 11:11:17.816 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 11:11:20 compute-0 nova_compute[189493]: 2025-12-09 11:11:20.563 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 11:11:20 compute-0 podman[250591]: 2025-12-09 11:11:20.931999877 +0000 UTC m=+0.077349443 container health_status d3a438131bb4ae6fd62d2e1493edbbbd51d1b8d6cbe1e9243f414a3aa421452b (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  9 11:11:20 compute-0 podman[250590]: 2025-12-09 11:11:20.963541693 +0000 UTC m=+0.116674624 container health_status 5da5cd4e36e0bba48fb617392bc8983ed1dbced7e4599ef74bb3327a2d50468d (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=openstack_network_exporter, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1755695350, vendor=Red Hat, Inc., config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, com.redhat.component=ubi9-minimal-container, version=9.6, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, config_id=edpm, io.buildah.version=1.33.7)
Dec  9 11:11:20 compute-0 podman[250597]: 2025-12-09 11:11:20.983102592 +0000 UTC m=+0.118494043 container health_status e0a077177b2f078df1f170a6e5c0e8e08d4365b999ec0c487047ed6ab628f3d6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Dec  9 11:11:22 compute-0 nova_compute[189493]: 2025-12-09 11:11:22.820 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 11:11:25 compute-0 nova_compute[189493]: 2025-12-09 11:11:25.566 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 11:11:27 compute-0 nova_compute[189493]: 2025-12-09 11:11:27.823 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 11:11:29 compute-0 podman[203687]: time="2025-12-09T11:11:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  9 11:11:29 compute-0 podman[203687]: @ - - [09/Dec/2025:11:11:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28290 "" "Go-http-client/1.1"
Dec  9 11:11:29 compute-0 podman[203687]: @ - - [09/Dec/2025:11:11:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4339 "" "Go-http-client/1.1"
Dec  9 11:11:30 compute-0 nova_compute[189493]: 2025-12-09 11:11:30.570 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:11:31 compute-0 openstack_network_exporter[205823]: ERROR   11:11:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  9 11:11:31 compute-0 openstack_network_exporter[205823]: ERROR   11:11:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  9 11:11:31 compute-0 openstack_network_exporter[205823]: ERROR   11:11:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  9 11:11:31 compute-0 openstack_network_exporter[205823]: ERROR   11:11:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  9 11:11:31 compute-0 openstack_network_exporter[205823]: ERROR   11:11:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
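
The ERROR lines above come from openstack_network_exporter probing OVS/OVN daemons through their appctl control sockets. On a compute node this is likely benign noise: ovn-northd normally runs on the control plane, not here, and the dpif-netdev/* commands only apply to a userspace (DPDK) datapath. A quick way to check which control sockets actually exist, as a sketch assuming the standard <daemon>.<pid>.ctl naming under the run directories mounted into the container:

    import glob

    # Standard OVS/OVN control-socket locations; the patterns are
    # assumptions based on the usual <daemon>.<pid>.ctl convention.
    for pattern in ("/run/openvswitch/ovs-vswitchd.*.ctl",
                    "/run/openvswitch/ovsdb-server.*.ctl",
                    "/run/ovn/ovn-controller.*.ctl",
                    "/run/ovn/ovn-northd.*.ctl"):
        found = glob.glob(pattern)
        print(pattern, "->", found if found else "none (matches the ERRORs above)")
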
Dec  9 11:11:32 compute-0 nova_compute[189493]: 2025-12-09 11:11:32.826 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:11:33 compute-0 podman[250656]: 2025-12-09 11:11:33.960161602 +0000 UTC m=+0.109659499 container health_status 0391d8911d61abd7376f1f93f329cadfe8d3add845c9e6f46fc2c3dfbcc4f02a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, container_name=multipathd, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec  9 11:11:35 compute-0 nova_compute[189493]: 2025-12-09 11:11:35.573 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:11:37 compute-0 nova_compute[189493]: 2025-12-09 11:11:37.829 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:11:37 compute-0 podman[250676]: 2025-12-09 11:11:37.963352921 +0000 UTC m=+0.105432747 container health_status 8508a94dacd5acdb5dbf860f4282331529be5c86ebd3e90b10e1dde8bc5013e9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Dec  9 11:11:40 compute-0 nova_compute[189493]: 2025-12-09 11:11:40.577 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:11:40 compute-0 nova_compute[189493]: 2025-12-09 11:11:40.843 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  9 11:11:41 compute-0 podman[250700]: 2025-12-09 11:11:41.9835817 +0000 UTC m=+0.119929222 container health_status ceb1c84a2b093143b9383b7e11364d7e851348d724743a0cd9ce4fd0c7070c92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Dec  9 11:11:42 compute-0 podman[250699]: 2025-12-09 11:11:42.000050606 +0000 UTC m=+0.139942362 container health_status 8ad198c17f1da12dc50d5e17562d0139fb2a2f84db056ee9551dbf4f34c4cb9d (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, managed_by=edpm_ansible, vcs-type=git, build-date=2024-09-18T21:23:30, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., release=1214.1726694543, config_id=edpm, version=9.4, io.buildah.version=1.29.0, release-0.7.12=, io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.component=ubi9-container, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, container_name=kepler, io.openshift.expose-services=, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, io.openshift.tags=base rhel9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, name=ubi9)
Dec  9 11:11:42 compute-0 nova_compute[189493]: 2025-12-09 11:11:42.832 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:11:42 compute-0 nova_compute[189493]: 2025-12-09 11:11:42.835 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  9 11:11:42 compute-0 nova_compute[189493]: 2025-12-09 11:11:42.840 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  9 11:11:42 compute-0 nova_compute[189493]: 2025-12-09 11:11:42.841 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  9 11:11:44 compute-0 podman[250736]: 2025-12-09 11:11:44.742189233 +0000 UTC m=+0.071733963 container health_status 8f562587c42532f877bd4ac5090cf2d81dd9415b6201e22f74972e6d6b9e9403 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Dec  9 11:11:44 compute-0 podman[250737]: 2025-12-09 11:11:44.742447131 +0000 UTC m=+0.065006286 container health_status b432835229990b9e7cd237d75f8273b15e565fca524d4ea9a7c1f1bf3c773614 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_id=edpm, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251125)
Dec  9 11:11:44 compute-0 nova_compute[189493]: 2025-12-09 11:11:44.836 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  9 11:11:45 compute-0 nova_compute[189493]: 2025-12-09 11:11:45.579 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:11:45 compute-0 nova_compute[189493]: 2025-12-09 11:11:45.841 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  9 11:11:47 compute-0 nova_compute[189493]: 2025-12-09 11:11:47.834 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:11:48 compute-0 nova_compute[189493]: 2025-12-09 11:11:48.842 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  9 11:11:48 compute-0 nova_compute[189493]: 2025-12-09 11:11:48.988 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  9 11:11:48 compute-0 nova_compute[189493]: 2025-12-09 11:11:48.989 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  9 11:11:48 compute-0 nova_compute[189493]: 2025-12-09 11:11:48.989 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
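
The acquiring/acquired/released triplet above is the standard oslo.concurrency pattern nova wraps around resource-tracker critical sections; the "waited" and "held" figures are the time spent blocking on, and then holding, the in-process lock. A minimal sketch of the same idiom:

    from oslo_concurrency import lockutils

    # Decorator form, as used by ResourceTracker methods:
    @lockutils.synchronized("compute_resources")
    def clean_compute_node_cache():
        pass  # critical section; the debug lines above time this region

    # Equivalent context-manager form:
    with lockutils.lock("compute_resources"):
        pass

    clean_compute_node_cache()
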
Dec  9 11:11:48 compute-0 nova_compute[189493]: 2025-12-09 11:11:48.990 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  9 11:11:49 compute-0 nova_compute[189493]: 2025-12-09 11:11:49.349 189497 WARNING nova.virt.libvirt.driver [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  9 11:11:49 compute-0 nova_compute[189493]: 2025-12-09 11:11:49.350 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5371MB free_disk=72.17669677734375GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  9 11:11:49 compute-0 nova_compute[189493]: 2025-12-09 11:11:49.351 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  9 11:11:49 compute-0 nova_compute[189493]: 2025-12-09 11:11:49.351 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  9 11:11:49 compute-0 nova_compute[189493]: 2025-12-09 11:11:49.446 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  9 11:11:49 compute-0 nova_compute[189493]: 2025-12-09 11:11:49.447 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  9 11:11:49 compute-0 nova_compute[189493]: 2025-12-09 11:11:49.490 189497 DEBUG nova.compute.provider_tree [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Inventory has not changed in ProviderTree for provider: cdc1168d-33c9-4d2c-8f23-1b695a68afd0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  9 11:11:49 compute-0 nova_compute[189493]: 2025-12-09 11:11:49.509 189497 DEBUG nova.scheduler.client.report [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Inventory has not changed for provider cdc1168d-33c9-4d2c-8f23-1b695a68afd0 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
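
The inventory dict above is what the resource tracker reports to Placement. Schedulable capacity per resource class follows from capacity = (total - reserved) * allocation_ratio, so this node can overcommit to 32 VCPUs while memory is not overcommitted at all:

    # Worked from the inventory data in the log line above.
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 79,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(rc, capacity)   # VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 70.2
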
Dec  9 11:11:49 compute-0 nova_compute[189493]: 2025-12-09 11:11:49.510 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  9 11:11:49 compute-0 nova_compute[189493]: 2025-12-09 11:11:49.511 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.160s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  9 11:11:50 compute-0 nova_compute[189493]: 2025-12-09 11:11:50.510 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  9 11:11:50 compute-0 nova_compute[189493]: 2025-12-09 11:11:50.581 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:11:51 compute-0 nova_compute[189493]: 2025-12-09 11:11:51.841 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  9 11:11:51 compute-0 nova_compute[189493]: 2025-12-09 11:11:51.842 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  9 11:11:51 compute-0 nova_compute[189493]: 2025-12-09 11:11:51.842 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  9 11:11:51 compute-0 nova_compute[189493]: 2025-12-09 11:11:51.861 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  9 11:11:51 compute-0 podman[250777]: 2025-12-09 11:11:51.975875109 +0000 UTC m=+0.117922148 container health_status 5da5cd4e36e0bba48fb617392bc8983ed1dbced7e4599ef74bb3327a2d50468d (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, build-date=2025-08-20T13:12:41, config_id=edpm, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, io.openshift.expose-services=, distribution-scope=public, com.redhat.component=ubi9-minimal-container, io.openshift.tags=minimal rhel9, name=ubi9-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Red Hat, Inc., vcs-type=git, vendor=Red Hat, Inc., managed_by=edpm_ansible, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6)
Dec  9 11:11:51 compute-0 podman[250778]: 2025-12-09 11:11:51.999028293 +0000 UTC m=+0.134479907 container health_status d3a438131bb4ae6fd62d2e1493edbbbd51d1b8d6cbe1e9243f414a3aa421452b (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
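
Worth noting in the node_exporter config above is the systemd collector filter: only units matching the unit-include pattern are scraped. A small check of which unit names pass (node_exporter anchors the expression, approximated here with fullmatch; the sample unit names are illustrative):

    import re

    # Pattern from --collector.systemd.unit-include above; the doubled
    # backslash in the log is shell escaping for one literal backslash.
    unit_include = re.compile(r"(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service")

    for unit in ("edpm_ovn_controller.service", "openvswitch.service",
                 "virtqemud.service", "rsyslog.service", "sshd.service"):
        print(unit, bool(unit_include.fullmatch(unit)))
    # everything but sshd.service matches
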
Dec  9 11:11:52 compute-0 podman[250779]: 2025-12-09 11:11:52.018855909 +0000 UTC m=+0.149147916 container health_status e0a077177b2f078df1f170a6e5c0e8e08d4365b999ec0c487047ed6ab628f3d6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec  9 11:11:52 compute-0 nova_compute[189493]: 2025-12-09 11:11:52.837 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:11:55 compute-0 nova_compute[189493]: 2025-12-09 11:11:55.583 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:11:55 compute-0 nova_compute[189493]: 2025-12-09 11:11:55.840 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  9 11:11:55 compute-0 nova_compute[189493]: 2025-12-09 11:11:55.841 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
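
The skip above is driven by a single config option: nova only reclaims soft-deleted instances when reclaim_instance_interval is positive, and it defaults to 0. The guard is effectively (a paraphrase, not nova's literal source):

    reclaim_instance_interval = 0   # nova.conf [DEFAULT], default 0

    if reclaim_instance_interval <= 0:
        print("CONF.reclaim_instance_interval <= 0, skipping...")
    else:
        print("would reclaim instances soft-deleted more than "
              f"{reclaim_instance_interval}s ago")
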
Dec  9 11:11:57 compute-0 nova_compute[189493]: 2025-12-09 11:11:57.840 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:11:59 compute-0 podman[203687]: time="2025-12-09T11:11:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  9 11:11:59 compute-0 podman[203687]: @ - - [09/Dec/2025:11:11:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28290 "" "Go-http-client/1.1"
Dec  9 11:11:59 compute-0 podman[203687]: @ - - [09/Dec/2025:11:11:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4339 "" "Go-http-client/1.1"
Dec  9 11:12:00 compute-0 nova_compute[189493]: 2025-12-09 11:12:00.585 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:12:01 compute-0 openstack_network_exporter[205823]: ERROR   11:12:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  9 11:12:01 compute-0 openstack_network_exporter[205823]: ERROR   11:12:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  9 11:12:01 compute-0 openstack_network_exporter[205823]: ERROR   11:12:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  9 11:12:01 compute-0 openstack_network_exporter[205823]: ERROR   11:12:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  9 11:12:01 compute-0 openstack_network_exporter[205823]: ERROR   11:12:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  9 11:12:02 compute-0 nova_compute[189493]: 2025-12-09 11:12:02.844 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:12:05 compute-0 podman[250839]: 2025-12-09 11:12:05.017521173 +0000 UTC m=+0.147233335 container health_status 0391d8911d61abd7376f1f93f329cadfe8d3add845c9e6f46fc2c3dfbcc4f02a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, container_name=multipathd, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_managed=true, config_id=multipathd)
Dec  9 11:12:05 compute-0 nova_compute[189493]: 2025-12-09 11:12:05.589 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:12:07 compute-0 nova_compute[189493]: 2025-12-09 11:12:07.847 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:12:08 compute-0 podman[250859]: 2025-12-09 11:12:08.974118815 +0000 UTC m=+0.123406324 container health_status 8508a94dacd5acdb5dbf860f4282331529be5c86ebd3e90b10e1dde8bc5013e9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  9 11:12:10 compute-0 nova_compute[189493]: 2025-12-09 11:12:10.592 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:12:12 compute-0 nova_compute[189493]: 2025-12-09 11:12:12.849 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:12:12 compute-0 podman[250884]: 2025-12-09 11:12:12.970131194 +0000 UTC m=+0.107050720 container health_status ceb1c84a2b093143b9383b7e11364d7e851348d724743a0cd9ce4fd0c7070c92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible)
Dec  9 11:12:13 compute-0 podman[250883]: 2025-12-09 11:12:13.013941506 +0000 UTC m=+0.155587427 container health_status 8ad198c17f1da12dc50d5e17562d0139fb2a2f84db056ee9551dbf4f34c4cb9d (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.component=ubi9-container, build-date=2024-09-18T21:23:30, config_id=edpm, architecture=x86_64, managed_by=edpm_ansible, name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=kepler, io.openshift.expose-services=)
Dec  9 11:12:14 compute-0 podman[250920]: 2025-12-09 11:12:14.954503435 +0000 UTC m=+0.101434830 container health_status 8f562587c42532f877bd4ac5090cf2d81dd9415b6201e22f74972e6d6b9e9403 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  9 11:12:14 compute-0 podman[250921]: 2025-12-09 11:12:14.957667509 +0000 UTC m=+0.100692361 container health_status b432835229990b9e7cd237d75f8273b15e565fca524d4ea9a7c1f1bf3c773614 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, config_id=edpm, org.label-schema.build-date=20251125, tcib_managed=true, org.label-schema.license=GPLv2, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Dec  9 11:12:15 compute-0 nova_compute[189493]: 2025-12-09 11:12:15.595 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:12:17 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:12:17.011 106644 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  9 11:12:17 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:12:17.012 106644 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  9 11:12:17 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:12:17.012 106644 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  9 11:12:17 compute-0 nova_compute[189493]: 2025-12-09 11:12:17.854 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:12:20 compute-0 nova_compute[189493]: 2025-12-09 11:12:20.598 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:12:22 compute-0 nova_compute[189493]: 2025-12-09 11:12:22.858 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:12:22 compute-0 podman[250958]: 2025-12-09 11:12:22.936725331 +0000 UTC m=+0.088364205 container health_status 5da5cd4e36e0bba48fb617392bc8983ed1dbced7e4599ef74bb3327a2d50468d (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, distribution-scope=public, release=1755695350, io.openshift.expose-services=, managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., name=ubi9-minimal, io.openshift.tags=minimal rhel9, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7)
Dec  9 11:12:22 compute-0 podman[250959]: 2025-12-09 11:12:22.939183786 +0000 UTC m=+0.085574971 container health_status d3a438131bb4ae6fd62d2e1493edbbbd51d1b8d6cbe1e9243f414a3aa421452b (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec  9 11:12:22 compute-0 podman[250960]: 2025-12-09 11:12:22.969205592 +0000 UTC m=+0.120626820 container health_status e0a077177b2f078df1f170a6e5c0e8e08d4365b999ec0c487047ed6ab628f3d6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Dec  9 11:12:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:12:23.300 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  9 11:12:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:12:23.300 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
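These two DEBUG lines describe the dispatch model visible in the rest of this cycle: every pollster in the [pollsters] source is submitted to a shared ThreadPoolExecutor, discovery runs once per cycle and is cached, and with fewer workers (here 1) than pollsters the cycle simply takes longer. A minimal sketch of that pattern, with hypothetical names (this is not ceilometer's actual code):

```python
# Minimal sketch of the pattern in the DEBUG lines above: one polling task,
# many pollsters, a small ThreadPoolExecutor, and a per-cycle discovery
# cache. All names are hypothetical; this is not ceilometer's code.
from concurrent.futures import ThreadPoolExecutor

class DummyPollster:
    def __init__(self, name):
        self.name = name
    def get_samples(self, resources):
        return []

def run_polling_task(pollsters, discover, threads=1):
    discovery_cache = {}  # one discovery run ('local_instances') per cycle
    def run_one(pollster):
        if "local_instances" not in discovery_cache:
            discovery_cache["local_instances"] = discover()
        resources = discovery_cache["local_instances"]
        if not resources:
            # matches "Skip pollster <name>, no resources found this cycle"
            return f"skipped {pollster.name}"
        return f"{pollster.name}: {pollster.get_samples(resources)}"
    # With threads < len(pollsters), submissions queue up, which is what
    # the "larger than the number of worker threads" message warns about.
    with ThreadPoolExecutor(max_workers=threads) as pool:
        for result in pool.map(run_one, pollsters):
            print("Finished processing pollster:", result)

run_polling_task([DummyPollster("cpu"), DummyPollster("memory.usage")],
                 discover=lambda: [], threads=1)
```

With no instances on this host, discovery returns an empty list and every compute pollster below is skipped, exactly as in the lines that follow.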
Dec  9 11:12:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:12:23.300 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b800>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a786b36e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 11:12:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:12:23.301 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f8a75e1b7d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 11:12:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:12:23.303 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e19820>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a786b36e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 11:12:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:12:23.304 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75eb8080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a786b36e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 11:12:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:12:23.304 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  9 11:12:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:12:23.313 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75eb8110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a786b36e0>] with cache [{}], pollster history [{'network.incoming.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 11:12:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:12:23.313 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f8a7854a570>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 11:12:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:12:23.315 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  9 11:12:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:12:23.316 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f8a75eb8050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 11:12:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:12:23.316 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  9 11:12:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:12:23.316 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f8a75eb80e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 11:12:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:12:23.316 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  9 11:12:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:12:23.315 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b1a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a786b36e0>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'disk.device.capacity': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 11:12:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:12:23.317 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75eb81a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a786b36e0>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'disk.device.capacity': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 11:12:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:12:23.317 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b2c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a786b36e0>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'disk.device.capacity': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 11:12:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:12:23.318 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a786b36e0>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'disk.device.capacity': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 11:12:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:12:23.318 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a786b36e0>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'disk.device.capacity': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 11:12:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:12:23.318 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a78fa8380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a786b36e0>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'disk.device.capacity': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 11:12:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:12:23.318 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a7702ebd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a786b36e0>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'disk.device.capacity': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 11:12:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:12:23.319 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b3e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a786b36e0>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'disk.device.capacity': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 11:12:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:12:23.319 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a786b36e0>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'disk.device.capacity': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 11:12:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:12:23.319 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75eb8440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a786b36e0>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'disk.device.capacity': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 11:12:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:12:23.319 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a78c21460>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a786b36e0>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'disk.device.capacity': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 11:12:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:12:23.320 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b4a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a786b36e0>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'disk.device.capacity': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 11:12:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:12:23.320 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1bce0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a786b36e0>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'disk.device.capacity': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 11:12:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:12:23.320 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a786b36e0>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'disk.device.capacity': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 11:12:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:12:23.320 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1bd10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a786b36e0>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'disk.device.capacity': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 11:12:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:12:23.321 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a786b36e0>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'disk.device.capacity': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 11:12:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:12:23.321 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1bd70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a786b36e0>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'disk.device.capacity': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 11:12:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:12:23.321 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1bdd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a786b36e0>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'disk.device.capacity': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 11:12:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:12:23.322 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1be30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a786b36e0>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'disk.device.capacity': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 11:12:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:12:23.322 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1bf20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a786b36e0>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'disk.device.capacity': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 11:12:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:12:23.322 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b7a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a786b36e0>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'disk.device.capacity': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 11:12:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:12:23.322 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1bfb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a786b36e0>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'disk.device.capacity': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 11:12:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:12:23.317 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f8a75e1b260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 11:12:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:12:23.323 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  9 11:12:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:12:23.323 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f8a75eb8170>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 11:12:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:12:23.323 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  9 11:12:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:12:23.323 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f8a75e1b290>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 11:12:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:12:23.324 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  9 11:12:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:12:23.324 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f8a75e1b2f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 11:12:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:12:23.324 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  9 11:12:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:12:23.324 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f8a75e1b350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 11:12:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:12:23.324 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  9 11:12:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:12:23.324 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f8a7710f530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 11:12:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:12:23.325 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  9 11:12:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:12:23.325 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f8a78ed1430>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 11:12:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:12:23.325 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  9 11:12:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:12:23.325 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f8a75e1b3b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 11:12:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:12:23.325 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  9 11:12:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:12:23.325 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f8a75e1b410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 11:12:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:12:23.326 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  9 11:12:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:12:23.326 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f8a75eb8410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 11:12:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:12:23.326 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  9 11:12:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:12:23.326 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f8a75e1be90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 11:12:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:12:23.326 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  9 11:12:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:12:23.326 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f8a75e1b470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 11:12:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:12:23.326 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  9 11:12:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:12:23.327 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f8a75e1b830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 11:12:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:12:23.327 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  9 11:12:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:12:23.327 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f8a75e1b4d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 11:12:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:12:23.327 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  9 11:12:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:12:23.327 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f8a75e1bad0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 11:12:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:12:23.327 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  9 11:12:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:12:23.327 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f8a75e1b530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 11:12:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:12:23.328 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  9 11:12:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:12:23.328 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f8a75e1bd40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 11:12:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:12:23.328 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  9 11:12:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:12:23.328 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f8a75e1bda0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 11:12:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:12:23.328 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  9 11:12:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:12:23.328 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f8a75e1be00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 11:12:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:12:23.328 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  9 11:12:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:12:23.328 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f8a75e1bef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 11:12:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:12:23.329 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  9 11:12:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:12:23.329 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f8a75e1b770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 11:12:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:12:23.329 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  9 11:12:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:12:23.329 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f8a75e1bf80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 11:12:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:12:23.329 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  9 11:12:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:12:23.329 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 11:12:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:12:23.330 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 11:12:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:12:23.330 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 11:12:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:12:23.330 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 11:12:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:12:23.330 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 11:12:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:12:23.331 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 11:12:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:12:23.331 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 11:12:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:12:23.331 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 11:12:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:12:23.331 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 11:12:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:12:23.331 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 11:12:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:12:23.332 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 11:12:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:12:23.332 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 11:12:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:12:23.332 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 11:12:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:12:23.332 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 11:12:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:12:23.332 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 11:12:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:12:23.333 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 11:12:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:12:23.333 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 11:12:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:12:23.333 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 11:12:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:12:23.333 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 11:12:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:12:23.333 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 11:12:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:12:23.334 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 11:12:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:12:23.334 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 11:12:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:12:23.334 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 11:12:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:12:23.334 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 11:12:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:12:23.334 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 11:12:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:12:23.335 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 11:12:25 compute-0 nova_compute[189493]: 2025-12-09 11:12:25.601 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 11:12:27 compute-0 nova_compute[189493]: 2025-12-09 11:12:27.868 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
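The recurring ovsdbapp [POLLIN] lines are ovs's poller reporting that nova-compute's OVSDB connection socket (fd 26 here) became readable. A small self-contained sketch of that wait-and-wake loop using the python-ovs bindings (assumed installed; a local socketpair stands in for the real OVSDB connection):

```python
# Sketch of the wakeup loop behind the "[POLLIN] on fd ..." DEBUG lines,
# using the python-ovs bindings. The socketpair is a stand-in for the
# OVSDB connection socket that ovsdbapp actually watches.
import select
import socket

import ovs.poller

rd, wr = socket.socketpair()
wr.send(b"ping")                            # make the read end readable

poller = ovs.poller.Poller()
poller.fd_wait(rd.fileno(), select.POLLIN)  # wait for readability...
poller.timer_wait(5000)                     # ...or a 5-second timeout
poller.block()                              # returns once the fd is ready;
                                            # ovs logs "[POLLIN] on fd N"
                                            # here when DEBUG vlog is on
print("woke up, data:", rd.recv(4))
```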
Dec  9 11:12:29 compute-0 podman[203687]: time="2025-12-09T11:12:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  9 11:12:29 compute-0 podman[203687]: @ - - [09/Dec/2025:11:12:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28290 "" "Go-http-client/1.1"
Dec  9 11:12:29 compute-0 podman[203687]: @ - - [09/Dec/2025:11:12:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4331 "" "Go-http-client/1.1"
Dec  9 11:12:30 compute-0 nova_compute[189493]: 2025-12-09 11:12:30.604 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:12:31 compute-0 openstack_network_exporter[205823]: ERROR   11:12:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  9 11:12:31 compute-0 openstack_network_exporter[205823]: ERROR   11:12:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  9 11:12:31 compute-0 openstack_network_exporter[205823]: ERROR   11:12:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  9 11:12:31 compute-0 openstack_network_exporter[205823]: ERROR   11:12:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
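The exporter's errors say it could not find control socket files for ovn-northd or ovsdb-server before issuing appctl calls. A rough, assumption-laden reconstruction of that lookup (OVS/OVN daemons conventionally create <daemon>.<pid>.ctl files under their run directory; the exporter's actual code may differ):

```python
# Hedged sketch of the lookup that fails above. Paths and naming follow
# common OVS/OVN defaults, not openstack_network_exporter's source.
import glob
import os

def find_daemon_pid(daemon, rundir="/var/run/ovn"):
    matches = glob.glob(os.path.join(rundir, f"{daemon}.*.ctl"))
    if not matches:
        raise FileNotFoundError(
            f"no control socket files found for {daemon}")
    # e.g. /var/run/ovn/ovn-northd.1234.ctl -> 1234
    return int(os.path.basename(matches[0]).split(".")[1])

# ovn-northd normally runs on controller nodes, not on a compute node
# like this one, so this lookup is expected to raise exactly the error
# the exporter logs.
```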
Dec  9 11:12:32 compute-0 nova_compute[189493]: 2025-12-09 11:12:32.868 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 11:12:35 compute-0 nova_compute[189493]: 2025-12-09 11:12:35.606 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 11:12:35 compute-0 podman[251025]: 2025-12-09 11:12:35.948996125 +0000 UTC m=+0.098797942 container health_status 0391d8911d61abd7376f1f93f329cadfe8d3add845c9e6f46fc2c3dfbcc4f02a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=multipathd, org.label-schema.vendor=CentOS, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd)
Dec  9 11:12:37 compute-0 nova_compute[189493]: 2025-12-09 11:12:37.870 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 11:12:39 compute-0 podman[251045]: 2025-12-09 11:12:39.957839472 +0000 UTC m=+0.107539263 container health_status 8508a94dacd5acdb5dbf860f4282331529be5c86ebd3e90b10e1dde8bc5013e9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec  9 11:12:40 compute-0 nova_compute[189493]: 2025-12-09 11:12:40.609 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 11:12:41 compute-0 nova_compute[189493]: 2025-12-09 11:12:41.844 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  9 11:12:42 compute-0 nova_compute[189493]: 2025-12-09 11:12:42.837 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  9 11:12:42 compute-0 nova_compute[189493]: 2025-12-09 11:12:42.840 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  9 11:12:42 compute-0 nova_compute[189493]: 2025-12-09 11:12:42.873 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 11:12:43 compute-0 nova_compute[189493]: 2025-12-09 11:12:43.840 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  9 11:12:43 compute-0 podman[251066]: 2025-12-09 11:12:43.939445637 +0000 UTC m=+0.092146495 container health_status 8ad198c17f1da12dc50d5e17562d0139fb2a2f84db056ee9551dbf4f34c4cb9d (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release-0.7.12=, version=9.4, config_id=edpm, name=ubi9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, architecture=x86_64, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, build-date=2024-09-18T21:23:30, io.openshift.expose-services=, vcs-type=git, container_name=kepler, io.buildah.version=1.29.0, distribution-scope=public, io.openshift.tags=base rhel9, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc.)
Dec  9 11:12:43 compute-0 podman[251067]: 2025-12-09 11:12:43.96103472 +0000 UTC m=+0.098705040 container health_status ceb1c84a2b093143b9383b7e11364d7e851348d724743a0cd9ce4fd0c7070c92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=edpm, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Dec  9 11:12:45 compute-0 nova_compute[189493]: 2025-12-09 11:12:45.613 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 11:12:45 compute-0 podman[251102]: 2025-12-09 11:12:45.945230717 +0000 UTC m=+0.089165415 container health_status 8f562587c42532f877bd4ac5090cf2d81dd9415b6201e22f74972e6d6b9e9403 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Dec  9 11:12:45 compute-0 podman[251103]: 2025-12-09 11:12:45.969420359 +0000 UTC m=+0.107684707 container health_status b432835229990b9e7cd237d75f8273b15e565fca524d4ea9a7c1f1bf3c773614 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, config_id=edpm, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Dec  9 11:12:47 compute-0 nova_compute[189493]: 2025-12-09 11:12:47.842 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  9 11:12:47 compute-0 nova_compute[189493]: 2025-12-09 11:12:47.876 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:12:49 compute-0 nova_compute[189493]: 2025-12-09 11:12:49.842 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  9 11:12:50 compute-0 nova_compute[189493]: 2025-12-09 11:12:50.616 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:12:50 compute-0 nova_compute[189493]: 2025-12-09 11:12:50.841 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  9 11:12:52 compute-0 nova_compute[189493]: 2025-12-09 11:12:52.877 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:12:53 compute-0 nova_compute[189493]: 2025-12-09 11:12:53.329 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  9 11:12:53 compute-0 nova_compute[189493]: 2025-12-09 11:12:53.330 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  9 11:12:53 compute-0 nova_compute[189493]: 2025-12-09 11:12:53.330 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  9 11:12:53 compute-0 nova_compute[189493]: 2025-12-09 11:12:53.331 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861#033[00m
Dec  9 11:12:53 compute-0 nova_compute[189493]: 2025-12-09 11:12:53.725 189497 WARNING nova.virt.libvirt.driver [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.#033[00m
Dec  9 11:12:53 compute-0 nova_compute[189493]: 2025-12-09 11:12:53.726 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5385MB free_disk=72.17579650878906GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034#033[00m
Dec  9 11:12:53 compute-0 nova_compute[189493]: 2025-12-09 11:12:53.727 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  9 11:12:53 compute-0 nova_compute[189493]: 2025-12-09 11:12:53.727 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  9 11:12:53 compute-0 podman[251145]: 2025-12-09 11:12:53.919054139 +0000 UTC m=+0.065015235 container health_status d3a438131bb4ae6fd62d2e1493edbbbd51d1b8d6cbe1e9243f414a3aa421452b (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Dec  9 11:12:53 compute-0 podman[251144]: 2025-12-09 11:12:53.935135106 +0000 UTC m=+0.076978912 container health_status 5da5cd4e36e0bba48fb617392bc8983ed1dbced7e4599ef74bb3327a2d50468d (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, release=1755695350, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, architecture=x86_64, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Dec  9 11:12:54 compute-0 podman[251146]: 2025-12-09 11:12:54.000585731 +0000 UTC m=+0.137757704 container health_status e0a077177b2f078df1f170a6e5c0e8e08d4365b999ec0c487047ed6ab628f3d6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible)
Dec  9 11:12:54 compute-0 nova_compute[189493]: 2025-12-09 11:12:54.121 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057#033[00m
Dec  9 11:12:54 compute-0 nova_compute[189493]: 2025-12-09 11:12:54.122 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066#033[00m
Dec  9 11:12:54 compute-0 nova_compute[189493]: 2025-12-09 11:12:54.194 189497 DEBUG nova.compute.provider_tree [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Inventory has not changed in ProviderTree for provider: cdc1168d-33c9-4d2c-8f23-1b695a68afd0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180#033[00m
Dec  9 11:12:54 compute-0 nova_compute[189493]: 2025-12-09 11:12:54.235 189497 DEBUG nova.scheduler.client.report [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Inventory has not changed for provider cdc1168d-33c9-4d2c-8f23-1b695a68afd0 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940#033[00m
Dec  9 11:12:54 compute-0 nova_compute[189493]: 2025-12-09 11:12:54.237 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995#033[00m
Dec  9 11:12:54 compute-0 nova_compute[189493]: 2025-12-09 11:12:54.237 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.510s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  9 11:12:55 compute-0 nova_compute[189493]: 2025-12-09 11:12:55.621 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:12:57 compute-0 nova_compute[189493]: 2025-12-09 11:12:57.239 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  9 11:12:57 compute-0 nova_compute[189493]: 2025-12-09 11:12:57.240 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  9 11:12:57 compute-0 nova_compute[189493]: 2025-12-09 11:12:57.241 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  9 11:12:57 compute-0 nova_compute[189493]: 2025-12-09 11:12:57.277 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944#033[00m
Dec  9 11:12:57 compute-0 nova_compute[189493]: 2025-12-09 11:12:57.841 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  9 11:12:57 compute-0 nova_compute[189493]: 2025-12-09 11:12:57.842 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477#033[00m
Dec  9 11:12:57 compute-0 nova_compute[189493]: 2025-12-09 11:12:57.880 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:12:59 compute-0 podman[203687]: time="2025-12-09T11:12:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  9 11:12:59 compute-0 podman[203687]: @ - - [09/Dec/2025:11:12:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28290 "" "Go-http-client/1.1"
Dec  9 11:12:59 compute-0 podman[203687]: @ - - [09/Dec/2025:11:12:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4335 "" "Go-http-client/1.1"
Dec  9 11:13:00 compute-0 nova_compute[189493]: 2025-12-09 11:13:00.624 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:13:01 compute-0 openstack_network_exporter[205823]: ERROR   11:13:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  9 11:13:01 compute-0 openstack_network_exporter[205823]: ERROR   11:13:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  9 11:13:01 compute-0 openstack_network_exporter[205823]: ERROR   11:13:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  9 11:13:01 compute-0 openstack_network_exporter[205823]: ERROR   11:13:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  9 11:13:01 compute-0 openstack_network_exporter[205823]: ERROR   11:13:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  9 11:13:02 compute-0 nova_compute[189493]: 2025-12-09 11:13:02.883 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:13:04 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:13:04.399 106644 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=10, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '56:ee:a7', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '3e:d4:ad:27:cb:0f'}, ipsec=False) old=SB_Global(nb_cfg=9) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  9 11:13:04 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:13:04.400 106644 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  9 11:13:04 compute-0 nova_compute[189493]: 2025-12-09 11:13:04.405 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:13:05 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:13:05.403 106644 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=9ec27861-bbe8-48fb-b30f-25b967e1609e, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '10'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  9 11:13:05 compute-0 nova_compute[189493]: 2025-12-09 11:13:05.627 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:13:06 compute-0 podman[251208]: 2025-12-09 11:13:06.97689641 +0000 UTC m=+0.119328385 container health_status 0391d8911d61abd7376f1f93f329cadfe8d3add845c9e6f46fc2c3dfbcc4f02a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=multipathd, org.label-schema.build-date=20251202, tcib_managed=true, container_name=multipathd, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, io.buildah.version=1.41.3)
Dec  9 11:13:07 compute-0 nova_compute[189493]: 2025-12-09 11:13:07.886 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:13:10 compute-0 nova_compute[189493]: 2025-12-09 11:13:10.631 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:13:10 compute-0 podman[251227]: 2025-12-09 11:13:10.929384454 +0000 UTC m=+0.081510593 container health_status 8508a94dacd5acdb5dbf860f4282331529be5c86ebd3e90b10e1dde8bc5013e9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec  9 11:13:12 compute-0 nova_compute[189493]: 2025-12-09 11:13:12.890 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:13:14 compute-0 podman[251252]: 2025-12-09 11:13:14.869419218 +0000 UTC m=+0.145179291 container health_status ceb1c84a2b093143b9383b7e11364d7e851348d724743a0cd9ce4fd0c7070c92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, io.buildah.version=1.41.3)
Dec  9 11:13:14 compute-0 podman[251251]: 2025-12-09 11:13:14.873015353 +0000 UTC m=+0.155644148 container health_status 8ad198c17f1da12dc50d5e17562d0139fb2a2f84db056ee9551dbf4f34c4cb9d (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9, architecture=x86_64, maintainer=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, build-date=2024-09-18T21:23:30, summary=Provides the latest release of Red Hat Universal Base Image 9., config_id=edpm, io.openshift.expose-services=, managed_by=edpm_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.buildah.version=1.29.0, container_name=kepler, release-0.7.12=, vcs-type=git, version=9.4, com.redhat.component=ubi9-container)
Dec  9 11:13:15 compute-0 nova_compute[189493]: 2025-12-09 11:13:15.634 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:13:16 compute-0 podman[251288]: 2025-12-09 11:13:16.974272655 +0000 UTC m=+0.118711918 container health_status 8f562587c42532f877bd4ac5090cf2d81dd9415b6201e22f74972e6d6b9e9403 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, config_id=ovn_metadata_agent)
Dec  9 11:13:17 compute-0 podman[251289]: 2025-12-09 11:13:17.00347245 +0000 UTC m=+0.142411857 container health_status b432835229990b9e7cd237d75f8273b15e565fca524d4ea9a7c1f1bf3c773614 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251125, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Dec  9 11:13:17 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:13:17.012 106644 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  9 11:13:17 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:13:17.013 106644 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  9 11:13:17 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:13:17.014 106644 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  9 11:13:17 compute-0 nova_compute[189493]: 2025-12-09 11:13:17.893 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:13:20 compute-0 nova_compute[189493]: 2025-12-09 11:13:20.637 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:13:22 compute-0 nova_compute[189493]: 2025-12-09 11:13:22.896 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:13:24 compute-0 podman[251326]: 2025-12-09 11:13:24.96516211 +0000 UTC m=+0.102684055 container health_status 5da5cd4e36e0bba48fb617392bc8983ed1dbced7e4599ef74bb3327a2d50468d (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, distribution-scope=public, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, release=1755695350, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., version=9.6, architecture=x86_64)
Dec  9 11:13:24 compute-0 podman[251327]: 2025-12-09 11:13:24.986815553 +0000 UTC m=+0.126385992 container health_status d3a438131bb4ae6fd62d2e1493edbbbd51d1b8d6cbe1e9243f414a3aa421452b (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec  9 11:13:25 compute-0 podman[251328]: 2025-12-09 11:13:25.008990831 +0000 UTC m=+0.145880059 container health_status e0a077177b2f078df1f170a6e5c0e8e08d4365b999ec0c487047ed6ab628f3d6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS)
Dec  9 11:13:25 compute-0 nova_compute[189493]: 2025-12-09 11:13:25.640 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:13:27 compute-0 nova_compute[189493]: 2025-12-09 11:13:27.899 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:13:29 compute-0 podman[203687]: time="2025-12-09T11:13:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  9 11:13:29 compute-0 podman[203687]: @ - - [09/Dec/2025:11:13:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28290 "" "Go-http-client/1.1"
Dec  9 11:13:29 compute-0 podman[203687]: @ - - [09/Dec/2025:11:13:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4336 "" "Go-http-client/1.1"
Dec  9 11:13:30 compute-0 nova_compute[189493]: 2025-12-09 11:13:30.644 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:13:31 compute-0 openstack_network_exporter[205823]: ERROR   11:13:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  9 11:13:31 compute-0 openstack_network_exporter[205823]: ERROR   11:13:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  9 11:13:31 compute-0 openstack_network_exporter[205823]: ERROR   11:13:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  9 11:13:31 compute-0 openstack_network_exporter[205823]: ERROR   11:13:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  9 11:13:31 compute-0 openstack_network_exporter[205823]: ERROR   11:13:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  9 11:13:32 compute-0 nova_compute[189493]: 2025-12-09 11:13:32.904 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:13:34 compute-0 ovn_controller[97780]: 2025-12-09T11:13:34Z|00065|memory_trim|INFO|Detected inactivity (last active 30003 ms ago): trimming memory
Dec  9 11:13:35 compute-0 nova_compute[189493]: 2025-12-09 11:13:35.646 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:13:37 compute-0 nova_compute[189493]: 2025-12-09 11:13:37.907 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:13:37 compute-0 podman[251393]: 2025-12-09 11:13:37.991192549 +0000 UTC m=+0.136244744 container health_status 0391d8911d61abd7376f1f93f329cadfe8d3add845c9e6f46fc2c3dfbcc4f02a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Dec  9 11:13:40 compute-0 nova_compute[189493]: 2025-12-09 11:13:40.648 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:13:41 compute-0 podman[251413]: 2025-12-09 11:13:41.951872038 +0000 UTC m=+0.089539906 container health_status 8508a94dacd5acdb5dbf860f4282331529be5c86ebd3e90b10e1dde8bc5013e9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec  9 11:13:42 compute-0 nova_compute[189493]: 2025-12-09 11:13:42.843 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  9 11:13:42 compute-0 nova_compute[189493]: 2025-12-09 11:13:42.910 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:13:44 compute-0 nova_compute[189493]: 2025-12-09 11:13:44.155 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  9 11:13:44 compute-0 nova_compute[189493]: 2025-12-09 11:13:44.836 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  9 11:13:45 compute-0 nova_compute[189493]: 2025-12-09 11:13:45.063 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  9 11:13:45 compute-0 nova_compute[189493]: 2025-12-09 11:13:45.063 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  9 11:13:45 compute-0 nova_compute[189493]: 2025-12-09 11:13:45.650 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:13:45 compute-0 podman[251438]: 2025-12-09 11:13:45.952910129 +0000 UTC m=+0.102702944 container health_status 8ad198c17f1da12dc50d5e17562d0139fb2a2f84db056ee9551dbf4f34c4cb9d (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9, release-0.7.12=, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, build-date=2024-09-18T21:23:30, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.openshift.expose-services=, maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, version=9.4, io.buildah.version=1.29.0, release=1214.1726694543, distribution-scope=public, name=ubi9, io.openshift.tags=base rhel9)
Dec  9 11:13:45 compute-0 podman[251439]: 2025-12-09 11:13:45.956930216 +0000 UTC m=+0.090607594 container health_status ceb1c84a2b093143b9383b7e11364d7e851348d724743a0cd9ce4fd0c7070c92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=edpm, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3)
Dec  9 11:13:47 compute-0 nova_compute[189493]: 2025-12-09 11:13:47.842 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  9 11:13:47 compute-0 nova_compute[189493]: 2025-12-09 11:13:47.913 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 11:13:47 compute-0 podman[251474]: 2025-12-09 11:13:47.985081129 +0000 UTC m=+0.132308780 container health_status 8f562587c42532f877bd4ac5090cf2d81dd9415b6201e22f74972e6d6b9e9403 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Dec  9 11:13:47 compute-0 podman[251475]: 2025-12-09 11:13:47.987686528 +0000 UTC m=+0.118771710 container health_status b432835229990b9e7cd237d75f8273b15e565fca524d4ea9a7c1f1bf3c773614 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42)
Dec  9 11:13:50 compute-0 nova_compute[189493]: 2025-12-09 11:13:50.653 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 11:13:50 compute-0 nova_compute[189493]: 2025-12-09 11:13:50.843 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  9 11:13:51 compute-0 nova_compute[189493]: 2025-12-09 11:13:51.841 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  9 11:13:52 compute-0 nova_compute[189493]: 2025-12-09 11:13:52.918 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 11:13:53 compute-0 nova_compute[189493]: 2025-12-09 11:13:53.172 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  9 11:13:53 compute-0 nova_compute[189493]: 2025-12-09 11:13:53.173 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  9 11:13:53 compute-0 nova_compute[189493]: 2025-12-09 11:13:53.173 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
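The Acquiring/acquired/released triplet above is oslo.concurrency's lockutils logging from its inner wrapper. A minimal sketch of the same pattern, assuming oslo.concurrency is installed; the lock name matches the messages, the guarded function body is illustrative:

    # The named lock serialises resource-tracker-style critical sections;
    # lockutils itself emits the "Acquiring"/"acquired"/"released" lines.
    from oslo_concurrency import lockutils

    COMPUTE_RESOURCES_SEMAPHORE = "compute_resources"

    @lockutils.synchronized(COMPUTE_RESOURCES_SEMAPHORE)
    def update_available_resource():
        # Everything here runs with the "compute_resources" lock held.
        pass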
Dec  9 11:13:53 compute-0 nova_compute[189493]: 2025-12-09 11:13:53.174 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec  9 11:13:53 compute-0 nova_compute[189493]: 2025-12-09 11:13:53.610 189497 WARNING nova.virt.libvirt.driver [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec  9 11:13:53 compute-0 nova_compute[189493]: 2025-12-09 11:13:53.611 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5383MB free_disk=72.17579650878906GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
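The pci_devices field in the resource view above is embedded JSON, so it can be lifted straight out of a captured line and summarised. A small sketch, with the device list abbreviated to a single entry from that event:

    # Parse the pci_devices JSON and count devices per vendor.
    import json
    from collections import Counter

    pci_json = '''[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0",
      "product_id": "1050", "vendor_id": "1af4", "numa_node": null,
      "label": "label_1af4_1050", "dev_type": "type-PCI"}]'''

    devices = json.loads(pci_json)
    by_vendor = Counter(d["vendor_id"] for d in devices)
    print(by_vendor)   # Counter({'1af4': 1}) for this abbreviated sample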
Dec  9 11:13:53 compute-0 nova_compute[189493]: 2025-12-09 11:13:53.612 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  9 11:13:53 compute-0 nova_compute[189493]: 2025-12-09 11:13:53.612 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  9 11:13:55 compute-0 nova_compute[189493]: 2025-12-09 11:13:55.656 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 11:13:55 compute-0 podman[251516]: 2025-12-09 11:13:55.975352068 +0000 UTC m=+0.117171829 container health_status d3a438131bb4ae6fd62d2e1493edbbbd51d1b8d6cbe1e9243f414a3aa421452b (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  9 11:13:55 compute-0 podman[251515]: 2025-12-09 11:13:55.976453628 +0000 UTC m=+0.125960192 container health_status 5da5cd4e36e0bba48fb617392bc8983ed1dbced7e4599ef74bb3327a2d50468d (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=openstack_network_exporter, distribution-scope=public, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, architecture=x86_64, config_id=edpm, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, io.openshift.expose-services=, vcs-type=git, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, com.redhat.component=ubi9-minimal-container)
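The exporter containers in this log publish Prometheus text metrics on host ports taken from their config_data: node_exporter on 9100 and openstack_network_exporter on 9105 above, kepler on 8888 and podman_exporter on 9882 elsewhere in the log. A quick scrape sketch, assuming plain HTTP; where the web.config.file enables TLS, https plus the deployment's CA bundle would be needed instead:

    # Probe each exporter's /metrics endpoint and report its size.
    import urllib.request

    for port in (9100, 9105, 8888, 9882):
        try:
            with urllib.request.urlopen(f"http://localhost:{port}/metrics",
                                        timeout=2) as r:
                print(port, r.status, len(r.read()), "bytes")
        except OSError as exc:   # URLError/timeouts subclass OSError
            print(port, "unreachable:", exc)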
Dec  9 11:13:56 compute-0 podman[251517]: 2025-12-09 11:13:56.007696416 +0000 UTC m=+0.153713947 container health_status e0a077177b2f078df1f170a6e5c0e8e08d4365b999ec0c487047ed6ab628f3d6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec  9 11:13:57 compute-0 nova_compute[189493]: 2025-12-09 11:13:57.922 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 11:13:59 compute-0 podman[203687]: time="2025-12-09T11:13:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  9 11:13:59 compute-0 podman[203687]: @ - - [09/Dec/2025:11:13:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28290 "" "Go-http-client/1.1"
Dec  9 11:13:59 compute-0 podman[203687]: @ - - [09/Dec/2025:11:13:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4336 "" "Go-http-client/1.1"
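The two GET lines above are the libpod REST API being read over podman's API socket; the podman_exporter config later in this log points CONTAINER_HOST at unix:///run/podman/podman.sock. A sketch issuing the same containers/json call over that socket (the socket path is assumed from that config and may differ per host):

    # http.client over an AF_UNIX socket instead of TCP.
    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        def __init__(self, path: str):
            super().__init__("localhost")
            self._path = path

        def connect(self):
            s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            s.connect(self._path)
            self.sock = s

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    body = conn.getresponse().read()
    print(len(json.loads(body)), "containers")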
Dec  9 11:14:00 compute-0 nova_compute[189493]: 2025-12-09 11:14:00.662 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:14:01 compute-0 openstack_network_exporter[205823]: ERROR   11:14:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  9 11:14:01 compute-0 openstack_network_exporter[205823]: ERROR   11:14:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  9 11:14:01 compute-0 openstack_network_exporter[205823]: ERROR   11:14:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  9 11:14:01 compute-0 openstack_network_exporter[205823]: ERROR   11:14:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  9 11:14:01 compute-0 openstack_network_exporter[205823]: ERROR   11:14:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
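These appctl errors mean the exporter found no matching *.ctl control sockets, which is plausible on a compute node: ovn-northd runs on the control plane, and the dpif-netdev commands only apply when a userspace (netdev) datapath exists. A quick check of which daemons actually expose a control socket here, with rundir paths assumed from the exporter's volume mounts above:

    # List control sockets in the usual OVS/OVN rundirs.
    import glob

    for rundir in ("/run/openvswitch", "/run/ovn"):
        print(rundir, "->", glob.glob(f"{rundir}/*.ctl") or "no control sockets")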
Dec  9 11:14:01 compute-0 nova_compute[189493]: 2025-12-09 11:14:01.875 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec  9 11:14:01 compute-0 nova_compute[189493]: 2025-12-09 11:14:01.876 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec  9 11:14:02 compute-0 nova_compute[189493]: 2025-12-09 11:14:02.276 189497 DEBUG nova.compute.provider_tree [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Inventory has not changed in ProviderTree for provider: cdc1168d-33c9-4d2c-8f23-1b695a68afd0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec  9 11:14:02 compute-0 nova_compute[189493]: 2025-12-09 11:14:02.461 189497 DEBUG nova.scheduler.client.report [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Inventory has not changed for provider cdc1168d-33c9-4d2c-8f23-1b695a68afd0 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
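Placement-style capacity falls out of that inventory as (total - reserved) * allocation_ratio per resource class, so the values above yield 32 schedulable VCPUs, 7167 MB of RAM and 70.2 GB of disk:

    # Effective capacity per resource class, from the inventory dict above.
    inventory = {
        "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
        "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
        "DISK_GB":   {"total": 79,   "reserved": 1,   "allocation_ratio": 0.9},
    }
    for rc, inv in inventory.items():
        cap = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
        print(f"{rc}: {cap:g}")   # VCPU: 32, MEMORY_MB: 7167, DISK_GB: 70.2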
Dec  9 11:14:02 compute-0 nova_compute[189493]: 2025-12-09 11:14:02.463 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec  9 11:14:02 compute-0 nova_compute[189493]: 2025-12-09 11:14:02.464 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 8.852s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  9 11:14:02 compute-0 nova_compute[189493]: 2025-12-09 11:14:02.930 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 11:14:04 compute-0 nova_compute[189493]: 2025-12-09 11:14:04.465 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  9 11:14:04 compute-0 nova_compute[189493]: 2025-12-09 11:14:04.465 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec  9 11:14:04 compute-0 nova_compute[189493]: 2025-12-09 11:14:04.465 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec  9 11:14:04 compute-0 nova_compute[189493]: 2025-12-09 11:14:04.547 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec  9 11:14:04 compute-0 nova_compute[189493]: 2025-12-09 11:14:04.547 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  9 11:14:04 compute-0 nova_compute[189493]: 2025-12-09 11:14:04.548 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
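Each "Running periodic task ComputeManager._..." line is oslo.service's PeriodicTasks dispatcher walking its registered tasks; _reclaim_queued_deletes then returns early because reclaim_instance_interval <= 0. A minimal sketch of the same machinery, assuming oslo.service and oslo.config are installed; the task names and 60s spacing are illustrative, not nova's actual configuration:

    from oslo_config import cfg
    from oslo_service import periodic_task

    class Manager(periodic_task.PeriodicTasks):
        @periodic_task.periodic_task(spacing=60)
        def _heal_instance_info_cache(self, context):
            pass  # nova rebuilds per-instance network info caches here

        @periodic_task.periodic_task
        def _reclaim_queued_deletes(self, context):
            pass  # nova skips the body when reclaim_instance_interval <= 0

    # In a real service this is driven from a timer; every call logs
    # "Running periodic task ..." for each task that is due.
    Manager(cfg.CONF).run_periodic_tasks(context=None)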
Dec  9 11:14:04 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:14:04.897 106644 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=11, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '56:ee:a7', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '3e:d4:ad:27:cb:0f'}, ipsec=False) old=SB_Global(nb_cfg=10) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Dec  9 11:14:04 compute-0 nova_compute[189493]: 2025-12-09 11:14:04.899 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 11:14:04 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:14:04.899 106644 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Dec  9 11:14:05 compute-0 nova_compute[189493]: 2025-12-09 11:14:05.664 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 11:14:07 compute-0 nova_compute[189493]: 2025-12-09 11:14:07.931 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 11:14:08 compute-0 podman[251580]: 2025-12-09 11:14:08.989095629 +0000 UTC m=+0.122836790 container health_status 0391d8911d61abd7376f1f93f329cadfe8d3add845c9e6f46fc2c3dfbcc4f02a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=multipathd)
Dec  9 11:14:10 compute-0 nova_compute[189493]: 2025-12-09 11:14:10.667 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 11:14:12 compute-0 podman[251600]: 2025-12-09 11:14:12.925713961 +0000 UTC m=+0.080131334 container health_status 8508a94dacd5acdb5dbf860f4282331529be5c86ebd3e90b10e1dde8bc5013e9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec  9 11:14:12 compute-0 nova_compute[189493]: 2025-12-09 11:14:12.937 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 11:14:14 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:14:14.903 106644 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=9ec27861-bbe8-48fb-b30f-25b967e1609e, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '11'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
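The transaction above is the metadata agent acknowledging the new nb_cfg after its 10-second delay: it writes neutron:ovn-metadata-sb-cfg into its Chassis_Private external_ids. A rough CLI equivalent from Python, assuming ovn-sbctl is available and can reach the southbound DB; the record UUID is the Chassis_Private row from the event, and the key is quoted because it contains colons:

    import subprocess

    record = "9ec27861-bbe8-48fb-b30f-25b967e1609e"
    subprocess.run(
        ["ovn-sbctl", "set", "Chassis_Private", record,
         'external_ids:"neutron:ovn-metadata-sb-cfg"="11"'],
        check=True,
    )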
Dec  9 11:14:15 compute-0 nova_compute[189493]: 2025-12-09 11:14:15.671 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 11:14:16 compute-0 podman[251623]: 2025-12-09 11:14:16.936296056 +0000 UTC m=+0.091748117 container health_status 8ad198c17f1da12dc50d5e17562d0139fb2a2f84db056ee9551dbf4f34c4cb9d (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, io.openshift.tags=base rhel9, release=1214.1726694543, managed_by=edpm_ansible, vcs-type=git, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.buildah.version=1.29.0, io.openshift.expose-services=, summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, vendor=Red Hat, Inc., architecture=x86_64, name=ubi9, version=9.4, com.redhat.component=ubi9-container, maintainer=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, container_name=kepler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, build-date=2024-09-18T21:23:30)
Dec  9 11:14:16 compute-0 podman[251624]: 2025-12-09 11:14:16.948927365 +0000 UTC m=+0.102533168 container health_status ceb1c84a2b093143b9383b7e11364d7e851348d724743a0cd9ce4fd0c7070c92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, org.label-schema.schema-version=1.0, managed_by=edpm_ansible)
Dec  9 11:14:17 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:14:17.014 106644 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  9 11:14:17 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:14:17.014 106644 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  9 11:14:17 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:14:17.015 106644 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  9 11:14:17 compute-0 nova_compute[189493]: 2025-12-09 11:14:17.938 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 11:14:18 compute-0 podman[251662]: 2025-12-09 11:14:18.956499122 +0000 UTC m=+0.109398599 container health_status b432835229990b9e7cd237d75f8273b15e565fca524d4ea9a7c1f1bf3c773614 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Dec  9 11:14:18 compute-0 podman[251661]: 2025-12-09 11:14:18.970054197 +0000 UTC m=+0.119057652 container health_status 8f562587c42532f877bd4ac5090cf2d81dd9415b6201e22f74972e6d6b9e9403 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Dec  9 11:14:20 compute-0 nova_compute[189493]: 2025-12-09 11:14:20.674 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 11:14:22 compute-0 nova_compute[189493]: 2025-12-09 11:14:22.942 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 11:14:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:14:23.301 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to take longer than expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Dec  9 11:14:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:14:23.302 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
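With more pollsters than worker threads, the remaining pollsters simply queue on the executor, which is why a polling cycle can overrun its interval. A toy sketch of that dispatch, with pollster names taken from the log and a stand-in poll function:

    # Registered pollsters queue on a ThreadPoolExecutor; here one worker
    # thread, matching the "[1] threads" in the message above.
    from concurrent.futures import ThreadPoolExecutor

    def poll(name: str) -> str:
        return f"{name}: no resources found this cycle"

    pollsters = ["network.incoming.bytes", "disk.device.capacity", "cpu"]
    with ThreadPoolExecutor(max_workers=1) as executor:
        for result in executor.map(poll, pollsters):
            print(result)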
Dec  9 11:14:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:14:23.303 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b800>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a786b36e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 11:14:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:14:23.304 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f8a75e1b7d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 11:14:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:14:23.306 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e19820>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a786b36e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 11:14:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:14:23.306 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75eb8080>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a786b36e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 11:14:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:14:23.307 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75eb8110>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a786b36e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 11:14:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:14:23.307 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b1a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a786b36e0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 11:14:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:14:23.308 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  9 11:14:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:14:23.309 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f8a7854a570>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 11:14:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:14:23.308 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75eb81a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a786b36e0>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'disk.device.capacity': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 11:14:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:14:23.309 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  9 11:14:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:14:23.310 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f8a75eb8050>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 11:14:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:14:23.310 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  9 11:14:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:14:23.310 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f8a75eb80e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 11:14:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:14:23.311 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  9 11:14:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:14:23.310 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b2c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a786b36e0>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'disk.device.capacity': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 11:14:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:14:23.311 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f8a75e1b260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 11:14:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:14:23.312 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  9 11:14:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:14:23.312 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f8a75eb8170>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 11:14:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:14:23.312 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  9 11:14:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:14:23.312 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f8a75e1b290>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 11:14:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:14:23.312 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  9 11:14:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:14:23.311 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b320>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a786b36e0>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'disk.device.capacity': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'disk.device.read.bytes': [], 'network.outgoing.packets.error': [], 'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 11:14:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:14:23.313 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a786b36e0>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'disk.device.capacity': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'disk.device.read.bytes': [], 'network.outgoing.packets.error': [], 'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 11:14:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:14:23.313 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f8a75e1b2f0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 11:14:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:14:23.314 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  9 11:14:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:14:23.314 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f8a75e1b350>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 11:14:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:14:23.314 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a78fa8380>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a786b36e0>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'disk.device.capacity': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'disk.device.read.bytes': [], 'network.outgoing.packets.error': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 11:14:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:14:23.314 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  9 11:14:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:14:23.315 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f8a7710f530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 11:14:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:14:23.315 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  9 11:14:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:14:23.315 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a7702ebd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a786b36e0>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'disk.device.capacity': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'disk.device.read.bytes': [], 'network.outgoing.packets.error': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'cpu': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 11:14:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:14:23.316 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b3e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a786b36e0>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'disk.device.capacity': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'disk.device.read.bytes': [], 'network.outgoing.packets.error': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'cpu': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 11:14:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:14:23.316 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f8a78ed1430>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 11:14:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:14:23.317 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  9 11:14:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:14:23.317 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f8a75e1b3b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 11:14:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:14:23.317 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a786b36e0>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'disk.device.capacity': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'disk.device.read.bytes': [], 'network.outgoing.packets.error': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'cpu': [], 'disk.device.allocation': [], 'disk.device.write.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 11:14:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:14:23.317 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  9 11:14:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:14:23.318 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f8a75e1b410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 11:14:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:14:23.318 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  9 11:14:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:14:23.318 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75eb8440>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a786b36e0>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'disk.device.capacity': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'disk.device.read.bytes': [], 'network.outgoing.packets.error': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'cpu': [], 'disk.device.allocation': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 11:14:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:14:23.319 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a78c21460>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a786b36e0>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'disk.device.capacity': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'disk.device.read.bytes': [], 'network.outgoing.packets.error': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'cpu': [], 'disk.device.allocation': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 11:14:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:14:23.319 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b4a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a786b36e0>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'disk.device.capacity': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'disk.device.read.bytes': [], 'network.outgoing.packets.error': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'cpu': [], 'disk.device.allocation': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 11:14:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:14:23.320 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1bce0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a786b36e0>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'disk.device.capacity': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'disk.device.read.bytes': [], 'network.outgoing.packets.error': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'cpu': [], 'disk.device.allocation': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 11:14:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:14:23.320 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f8a75eb8410>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 11:14:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:14:23.321 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  9 11:14:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:14:23.321 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f8a75e1be90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 11:14:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:14:23.321 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  9 11:14:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:14:23.321 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f8a75e1b470>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 11:14:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:14:23.320 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b500>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a786b36e0>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'disk.device.capacity': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'disk.device.read.bytes': [], 'network.outgoing.packets.error': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'cpu': [], 'disk.device.allocation': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'power.state': [], 'network.outgoing.bytes': [], 'disk.device.write.requests': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 11:14:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:14:23.321 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  9 11:14:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:14:23.322 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f8a75e1b830>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 11:14:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:14:23.322 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1bd10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a786b36e0>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'disk.device.capacity': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'disk.device.read.bytes': [], 'network.outgoing.packets.error': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'cpu': [], 'disk.device.allocation': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'power.state': [], 'network.outgoing.bytes': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 11:14:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:14:23.322 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  9 11:14:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:14:23.323 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f8a75e1b4d0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 11:14:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:14:23.323 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  9 11:14:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:14:23.323 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f8a75e1bad0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 11:14:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:14:23.324 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  9 11:14:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:14:23.323 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b560>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a786b36e0>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'disk.device.capacity': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'disk.device.read.bytes': [], 'network.outgoing.packets.error': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'cpu': [], 'disk.device.allocation': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'power.state': [], 'network.outgoing.bytes': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 11:14:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:14:23.324 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1bd70>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a786b36e0>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'disk.device.capacity': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'disk.device.read.bytes': [], 'network.outgoing.packets.error': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'cpu': [], 'disk.device.allocation': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'power.state': [], 'network.outgoing.bytes': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 11:14:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:14:23.325 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1bdd0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a786b36e0>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'disk.device.capacity': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'disk.device.read.bytes': [], 'network.outgoing.packets.error': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'cpu': [], 'disk.device.allocation': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'power.state': [], 'network.outgoing.bytes': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 11:14:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:14:23.325 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1be30>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a786b36e0>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'disk.device.capacity': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'disk.device.read.bytes': [], 'network.outgoing.packets.error': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'cpu': [], 'disk.device.allocation': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'power.state': [], 'network.outgoing.bytes': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 11:14:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:14:23.325 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1bf20>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a786b36e0>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'disk.device.capacity': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'disk.device.read.bytes': [], 'network.outgoing.packets.error': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'cpu': [], 'disk.device.allocation': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'power.state': [], 'network.outgoing.bytes': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 11:14:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:14:23.325 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f8a75e1b530>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 11:14:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:14:23.326 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  9 11:14:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:14:23.326 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1b7a0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a786b36e0>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'disk.device.capacity': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'disk.device.read.bytes': [], 'network.outgoing.packets.error': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'cpu': [], 'disk.device.allocation': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'power.state': [], 'network.outgoing.bytes': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': [], 'disk.root.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 11:14:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:14:23.326 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f8a75e1bd40>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 11:14:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:14:23.327 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  9 11:14:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:14:23.327 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f8a75e1bda0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 11:14:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:14:23.327 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  9 11:14:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:14:23.327 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f8a75e1be00>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 11:14:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:14:23.327 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f8a75e1bfb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f8a786b36e0>] with cache [{}], pollster history [{'network.incoming.bytes': [], 'disk.device.capacity': [], 'network.outgoing.packets': [], 'network.outgoing.packets.drop': [], 'disk.device.read.bytes': [], 'network.outgoing.packets.error': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': [], 'cpu': [], 'disk.device.allocation': [], 'disk.device.write.bytes': [], 'disk.device.write.latency': [], 'power.state': [], 'network.outgoing.bytes': [], 'disk.device.write.requests': [], 'network.incoming.bytes.delta': [], 'disk.ephemeral.size': [], 'network.incoming.bytes.rate': [], 'disk.root.size': [], 'network.incoming.packets': [], 'network.incoming.packets.drop': [], 'network.incoming.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Dec  9 11:14:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:14:23.327 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  9 11:14:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:14:23.328 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f8a75e1bef0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 11:14:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:14:23.328 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  9 11:14:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:14:23.329 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f8a75e1b770>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 11:14:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:14:23.329 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  9 11:14:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:14:23.329 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f8a75e1bf80>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f8a76fd3bf0>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Dec  9 11:14:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:14:23.329 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Dec  9 11:14:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:14:23.329 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 11:14:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:14:23.330 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 11:14:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:14:23.330 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 11:14:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:14:23.330 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 11:14:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:14:23.330 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 11:14:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:14:23.331 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 11:14:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:14:23.331 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 11:14:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:14:23.331 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 11:14:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:14:23.331 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 11:14:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:14:23.331 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 11:14:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:14:23.332 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 11:14:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:14:23.332 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 11:14:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:14:23.332 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 11:14:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:14:23.332 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 11:14:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:14:23.332 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 11:14:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:14:23.333 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 11:14:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:14:23.333 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 11:14:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:14:23.333 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 11:14:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:14:23.333 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 11:14:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:14:23.334 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 11:14:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:14:23.334 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 11:14:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:14:23.334 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 11:14:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:14:23.334 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 11:14:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:14:23.334 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 11:14:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:14:23.335 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Dec  9 11:14:23 compute-0 ceilometer_agent_compute[200197]: 2025-12-09 11:14:23.335 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
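Note: every compute pollster in the cycle above was registered and then skipped with "no resources found this cycle" because the shared discovery cache stayed empty ([{'local_instances': []}]) — ceilometer's local_instances discovery found no guest domains on compute-0, so each pollster finishes without emitting samples. A quick cross-check (a sketch, assuming the libvirt Python bindings are installed on the host and the agent polls the system libvirtd URI) is to list domains directly:

    import libvirt  # from the libvirt-python package

    conn = libvirt.open('qemu:///system')
    try:
        # An empty list here is consistent with the empty
        # 'local_instances' discovery cache in the log above.
        print([dom.name() for dom in conn.listAllDomains()])
    finally:
        conn.close()

If the host genuinely has no running instances, these skips are expected and harmless.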
Dec  9 11:14:25 compute-0 nova_compute[189493]: 2025-12-09 11:14:25.677 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:14:26 compute-0 podman[251703]: 2025-12-09 11:14:26.980613049 +0000 UTC m=+0.113456275 container health_status d3a438131bb4ae6fd62d2e1493edbbbd51d1b8d6cbe1e9243f414a3aa421452b (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm)
Dec  9 11:14:26 compute-0 podman[251702]: 2025-12-09 11:14:26.992957092 +0000 UTC m=+0.134718092 container health_status 5da5cd4e36e0bba48fb617392bc8983ed1dbced7e4599ef74bb3327a2d50468d (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, build-date=2025-08-20T13:12:41, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.component=ubi9-minimal-container, version=9.6, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, maintainer=Red Hat, Inc., vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.tags=minimal rhel9, container_name=openstack_network_exporter, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=edpm, distribution-scope=public, vendor=Red Hat, Inc.)
Dec  9 11:14:27 compute-0 podman[251704]: 2025-12-09 11:14:27.012914663 +0000 UTC m=+0.141562050 container health_status e0a077177b2f078df1f170a6e5c0e8e08d4365b999ec0c487047ed6ab628f3d6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller)
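Note: the three podman health_status records above are periodic healthcheck transactions for node_exporter, openstack_network_exporter, and ovn_controller; health_status=healthy with health_failing_streak=0 means the test command declared under the 'healthcheck' key in each container's config_data is passing. The same check can be run on demand with, e.g., podman healthcheck run node_exporter, which exits 0 when the container reports healthy.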
Dec  9 11:14:27 compute-0 nova_compute[189493]: 2025-12-09 11:14:27.439 189497 ERROR nova.servicegroup.drivers.db [-] Unexpected error while reporting service status: oslo_messaging.rpc.client.RemoteError: Remote error: DBConnectionError (pymysql.err.OperationalError) (2003, "Can't connect to MySQL server on 'openstack-cell1.openstack.svc' ([Errno 111] ECONNREFUSED)")
Dec  9 11:14:27 compute-0 nova_compute[189493]: [SQL: SELECT 1]
Dec  9 11:14:27 compute-0 nova_compute[189493]: (Background on this error at: https://sqlalche.me/e/14/e3q8)
Dec  9 11:14:27 compute-0 nova_compute[189493]: ['Traceback (most recent call last):\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 1900, in _execute_context\n    self.dialect.do_execute(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 736, in do_execute\n    cursor.execute(statement, parameters)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/cursors.py", line 163, in execute\n    result = self._query(query)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/cursors.py", line 321, in _query\n    conn.query(q)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 505, in query\n    self._affected_rows = self._read_query_result(unbuffered=unbuffered)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 724, in _read_query_result\n    result.read()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 1069, in read\n    first_packet = self.connection._read_packet()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 646, in _read_packet\n    packet_header = self._read_bytes(4)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 698, in _read_bytes\n    raise err.OperationalError(\n', "pymysql.err.OperationalError: (2013, 'Lost connection to MySQL server during query')\n", '\nThe above exception was the direct cause of the following exception:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/oslo_db/sqlalchemy/engines.py", line 74, in _connect_ping_listener\n    connection.scalar(select(1))\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 1262, in scalar\n    return self.execute(object_, *multiparams, **params).scalar()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 1380, in execute\n    return meth(self, multiparams, params, _EMPTY_EXECUTION_OPTS)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/sql/elements.py", line 334, in _execute_on_connection\n    return connection._execute_clauseelement(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 1572, in _execute_clauseelement\n    ret = self._execute_context(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 1943, in _execute_context\n    self._handle_dbapi_exception(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 2122, in _handle_dbapi_exception\n    util.raise_(newraise, with_traceback=exc_info[2], from_=e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 1900, in _execute_context\n    self.dialect.do_execute(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 736, in do_execute\n    cursor.execute(statement, parameters)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/cursors.py", line 163, in execute\n    result = self._query(query)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/cursors.py", line 321, in _query\n    conn.query(q)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 505, in query\n    self._affected_rows = self._read_query_result(unbuffered=unbuffered)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 724, in _read_query_result\n    result.read()\n', '  File 
"/usr/lib/python3.9/site-packages/pymysql/connections.py", line 1069, in read\n    first_packet = self.connection._read_packet()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 646, in _read_packet\n    packet_header = self._read_bytes(4)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 698, in _read_bytes\n    raise err.OperationalError(\n', "oslo_db.exception.DBConnectionError: (pymysql.err.OperationalError) (2013, 'Lost connection to MySQL server during query')\n[SQL: SELECT 1]\n(Background on this error at: https://sqlalche.me/e/14/e3q8)\n", '\nDuring handling of the above exception, another exception occurred:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 569, in connect\n    sock = socket.create_connection(\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 63, in create_connection\n    raise err\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 53, in create_connection\n    sock.connect(sa)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 270, in connect\n    socket_checkerr(fd)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 54, in socket_checkerr\n    raise socket.error(err, errno.errorcode[err])\n', 'ConnectionRefusedError: [Errno 111] ECONNREFUSED\n', '\nDuring handling of the above exception, another exception occurred:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 1798, in _execute_context\n    conn = self._revalidate_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 646, in _revalidate_connection\n    self._dbapi_connection = self.engine.raw_connection(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3394, in raw_connection\n    return self._wrap_pool_connect(self.pool.connect, _connection)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3368, in _wrap_pool_connect\n    util.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 653, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File 
"/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'pymysql.err.OperationalError: (2003, "Can\'t connect to MySQL server on \'openst
Dec  9 11:14:27 compute-0 nova_compute[189493]: 2025-12-09 11:14:27.439 189497 ERROR nova.servicegroup.drivers.db Traceback (most recent call last):
Dec  9 11:14:27 compute-0 nova_compute[189493]: 2025-12-09 11:14:27.439 189497 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py", line 92, in _report_state
Dec  9 11:14:27 compute-0 nova_compute[189493]: 2025-12-09 11:14:27.439 189497 ERROR nova.servicegroup.drivers.db     service.service_ref.save()
Dec  9 11:14:27 compute-0 nova_compute[189493]: 2025-12-09 11:14:27.439 189497 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 209, in wrapper
Dec  9 11:14:27 compute-0 nova_compute[189493]: 2025-12-09 11:14:27.439 189497 ERROR nova.servicegroup.drivers.db     updates, result = self.indirection_api.object_action(
Dec  9 11:14:27 compute-0 nova_compute[189493]: 2025-12-09 11:14:27.439 189497 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/nova/conductor/rpcapi.py", line 247, in object_action
Dec  9 11:14:27 compute-0 nova_compute[189493]: 2025-12-09 11:14:27.439 189497 ERROR nova.servicegroup.drivers.db     return cctxt.call(context, 'object_action', objinst=objinst,
Dec  9 11:14:27 compute-0 nova_compute[189493]: 2025-12-09 11:14:27.439 189497 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/client.py", line 190, in call
Dec  9 11:14:27 compute-0 nova_compute[189493]: 2025-12-09 11:14:27.439 189497 ERROR nova.servicegroup.drivers.db     result = self.transport._send(
Dec  9 11:14:27 compute-0 nova_compute[189493]: 2025-12-09 11:14:27.439 189497 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 123, in _send
Dec  9 11:14:27 compute-0 nova_compute[189493]: 2025-12-09 11:14:27.439 189497 ERROR nova.servicegroup.drivers.db     return self._driver.send(target, ctxt, message,
Dec  9 11:14:27 compute-0 nova_compute[189493]: 2025-12-09 11:14:27.439 189497 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 689, in send
Dec  9 11:14:27 compute-0 nova_compute[189493]: 2025-12-09 11:14:27.439 189497 ERROR nova.servicegroup.drivers.db     return self._send(target, ctxt, message, wait_for_reply, timeout,
Dec  9 11:14:27 compute-0 nova_compute[189493]: 2025-12-09 11:14:27.439 189497 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 681, in _send
Dec  9 11:14:27 compute-0 nova_compute[189493]: 2025-12-09 11:14:27.439 189497 ERROR nova.servicegroup.drivers.db     raise result
Dec  9 11:14:27 compute-0 nova_compute[189493]: 2025-12-09 11:14:27.439 189497 ERROR nova.servicegroup.drivers.db oslo_messaging.rpc.client.RemoteError: Remote error: DBConnectionError (pymysql.err.OperationalError) (2003, "Can't connect to MySQL server on 'openstack-cell1.openstack.svc' ([Errno 111] ECONNREFUSED)")
Dec  9 11:14:27 compute-0 nova_compute[189493]: 2025-12-09 11:14:27.439 189497 ERROR nova.servicegroup.drivers.db [SQL: SELECT 1]
Dec  9 11:14:27 compute-0 nova_compute[189493]: 2025-12-09 11:14:27.439 189497 ERROR nova.servicegroup.drivers.db (Background on this error at: https://sqlalche.me/e/14/e3q8)
Dec  9 11:14:27 compute-0 nova_compute[189493]: 2025-12-09 11:14:27.439 189497 ERROR nova.servicegroup.drivers.db ['Traceback (most recent call last):\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 1900, in _execute_context\n    self.dialect.do_execute(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 736, in do_execute\n    cursor.execute(statement, parameters)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/cursors.py", line 163, in execute\n    result = self._query(query)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/cursors.py", line 321, in _query\n    conn.query(q)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 505, in query\n    self._affected_rows = self._read_query_result(unbuffered=unbuffered)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 724, in _read_query_result\n    result.read()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 1069, in read\n    first_packet = self.connection._read_packet()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 646, in _read_packet\n    packet_header = self._read_bytes(4)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 698, in _read_bytes\n    raise err.OperationalError(\n', "pymysql.err.OperationalError: (2013, 'Lost connection to MySQL server during query')\n", '\nThe above exception was the direct cause of the following exception:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/oslo_db/sqlalchemy/engines.py", line 74, in _connect_ping_listener\n    connection.scalar(select(1))\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 1262, in scalar\n    return self.execute(object_, *multiparams, **params).scalar()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 1380, in execute\n    return meth(self, multiparams, params, _EMPTY_EXECUTION_OPTS)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/sql/elements.py", line 334, in _execute_on_connection\n    return connection._execute_clauseelement(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 1572, in _execute_clauseelement\n    ret = self._execute_context(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 1943, in _execute_context\n    self._handle_dbapi_exception(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 2122, in _handle_dbapi_exception\n    util.raise_(newraise, with_traceback=exc_info[2], from_=e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 1900, in _execute_context\n    self.dialect.do_execute(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 736, in do_execute\n    cursor.execute(statement, parameters)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/cursors.py", line 163, in execute\n    result = self._query(query)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/cursors.py", line 321, in _query\n    conn.query(q)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 505, in query\n    self._affected_rows = self._read_query_result(unbuffered=unbuffered)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 724, in _read_query_result\n    result.read()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 1069, in read\n    first_packet = self.connection._read_packet()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 646, in _read_packet\n    packet_header = self._read_bytes(4)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 698, in _read_bytes\n    raise err.OperationalError(\n', "oslo_db.exception.DBConnectionError: (pymysql.err.OperationalError) (2013, 'Lost connection to MySQL server during query')\n[SQL: SELECT 1]\n(Background on this error at: https://sqlalche.me/e/14/e3q8)\n", '\nDuring handling of the above exception, another exception occurred:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 569, in connect\n    sock = socket.create_connection(\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 63, in create_connection\n    raise err\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 53, in create_connection\n    sock.connect(sa)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 270, in connect\n    socket_checkerr(fd)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 54, in socket_checkerr\n    raise socket.error(err, errno.errorcode[err])\n', 'ConnectionRefusedError: [Errno 111] ECONNREFUSED\n', '\nDuring handling of the above exception, another exception occurred:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 1798, in _execute_context\n    conn = self._revalidate_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 646, in _revalidate_connection\n    self._dbapi_connection = self.engine.raw_connection(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3394, in raw_connection\n    return self._wrap_pool_connect(self.pool.connect, _connection)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3368, in _wrap_pool_connect\n    util.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 653, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'pymysql.err.Op
Dec  9 11:14:27 compute-0 nova_compute[189493]: voke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'oslo_db.exception.DBConnectionError: (pymysql.err.OperationalError) (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n[SQL: SELECT 1]\n(Background on this error at: https://sqlalche.me/e/14/e3q8)\n'].
Dec  9 11:14:27 compute-0 nova_compute[189493]: 2025-12-09 11:14:27.439 189497 ERROR nova.servicegroup.drivers.db #033[00m
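Taken together, the chained tracebacks above show the heartbeat connection first dying mid-query (error 2013, 'Lost connection to MySQL server during query' while oslo_db's ping listener ran SELECT 1), and every reconnect attempt then being refused outright (error 2003 / [Errno 111] ECONNREFUSED against openstack-cell1.openstack.svc). That sequence is consistent with the cell1 database endpoint going down or restarting, rather than a transient network blip.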
Dec  9 11:14:27 compute-0 rsyslogd[236818]: message too long (15362) with configured size 8096, begin of message is: ['Traceback (most recent call last):\n', '  File "/usr/lib64/python3.9/site-pack [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Dec  9 11:14:27 compute-0 rsyslogd[236818]: message too long (14504) with configured size 8096, begin of message is: 2025-12-09 11:14:27.439 189497 ERROR nova.servicegroup.drivers.db ['Traceback (m [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
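The two rsyslogd complaints above also explain why the serialized tracebacks in this excerpt break off mid-frame: the nova error payloads (15362 and 14504 bytes) exceed rsyslog's configured 8096-byte maximum and are truncated, as described at the rsyslog.com/e/2445 link in the message. If complete payloads are wanted in the syslog copy, the limit can be raised, e.g. with a global(maxMessageSize="64k") statement (or the legacy $MaxMessageSize 64k directive) early in /etc/rsyslog.conf.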
Dec  9 11:14:27 compute-0 nova_compute[189493]: 2025-12-09 11:14:27.944 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:14:29 compute-0 podman[203687]: time="2025-12-09T11:14:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  9 11:14:29 compute-0 podman[203687]: @ - - [09/Dec/2025:11:14:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28290 "" "Go-http-client/1.1"
Dec  9 11:14:29 compute-0 podman[203687]: @ - - [09/Dec/2025:11:14:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4339 "" "Go-http-client/1.1"
Dec  9 11:14:30 compute-0 ovn_controller[97780]: 2025-12-09T11:14:30Z|00066|ovsdb_idl|WARN|transaction error: {"details":"Transaction causes multiple rows in \"MAC_Binding\" table to have identical values (lrp-fe50431a-d63b-4063-9df9-acee8df3bf71 and \"192.168.122.80\") for index on columns \"logical_port\" and \"ip\".  First row, with UUID 563226b4-7651-4a2e-af7c-224282404230, was inserted by this transaction.  Second row, with UUID 4b89d1fa-5e73-4206-8c2b-589a01ad469a, existed in the database before this transaction and was not modified by the transaction.","error":"constraint violation"}
Dec  9 11:14:30 compute-0 ovn_controller[97780]: 2025-12-09T11:14:30Z|00067|main|INFO|OVNSB commit failed, force recompute next time.
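The transaction error above means ovn-controller tried to insert a MAC_Binding row for (lrp-fe50431a-d63b-4063-9df9-acee8df3bf71, 192.168.122.80) while row 4b89d1fa-5e73-4206-8c2b-589a01ad469a already holds that (logical_port, ip) pair in the southbound database, so the commit was rejected and, per the next line, a full recompute is forced. If the condition persists, the pre-existing row can be inspected from a node with southbound access and, if stale, removed; a hypothetical invocation (exact shell quoting may vary) would be ovn-sbctl find MAC_Binding ip='"192.168.122.80"' followed by ovn-sbctl destroy MAC_Binding 4b89d1fa-5e73-4206-8c2b-589a01ad469a.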
Dec  9 11:14:30 compute-0 nova_compute[189493]: 2025-12-09 11:14:30.437 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:14:30 compute-0 nova_compute[189493]: 2025-12-09 11:14:30.541 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:14:30 compute-0 nova_compute[189493]: 2025-12-09 11:14:30.679 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:14:31 compute-0 openstack_network_exporter[205823]: ERROR   11:14:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  9 11:14:31 compute-0 openstack_network_exporter[205823]: ERROR   11:14:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  9 11:14:31 compute-0 openstack_network_exporter[205823]: ERROR   11:14:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  9 11:14:31 compute-0 openstack_network_exporter[205823]: ERROR   11:14:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  9 11:14:31 compute-0 openstack_network_exporter[205823]: ERROR   11:14:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
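The openstack_network_exporter errors in this burst are scrape-target mismatches rather than data-plane faults: a compute node runs no ovn-northd, so no control socket exists for it, and the dpif-netdev/pmd-perf-show and dpif-netdev/pmd-rxq-show appctl calls only return data for a userspace (netdev/DPDK) datapath, so on a kernel-datapath host they fail with "please specify an existing datapath".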
Dec  9 11:14:32 compute-0 nova_compute[189493]: 2025-12-09 11:14:32.946 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:14:33 compute-0 ovn_controller[97780]: 2025-12-09T11:14:33Z|00068|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Dec  9 11:14:35 compute-0 nova_compute[189493]: 2025-12-09 11:14:35.682 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:14:35 compute-0 nova_compute[189493]: 2025-12-09 11:14:35.828 189497 ERROR nova.servicegroup.drivers.db [-] Unexpected error while reporting service status: oslo_messaging.rpc.client.RemoteError: Remote error: DBConnectionError (pymysql.err.OperationalError) (2003, "Can't connect to MySQL server on 'openstack-cell1.openstack.svc' ([Errno 111] ECONNREFUSED)")
Dec  9 11:14:35 compute-0 nova_compute[189493]: (Background on this error at: https://sqlalche.me/e/14/e3q8)
Dec  9 11:14:35 compute-0 nova_compute[189493]: ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 569, in connect\n    sock = socket.create_connection(\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 63, in create_connection\n    raise err\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 53, in create_connection\n    sock.connect(sa)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 270, in connect\n    socket_checkerr(fd)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 54, in socket_checkerr\n    raise socket.error(err, errno.errorcode[err])\n', 'ConnectionRefusedError: [Errno 111] ECONNREFUSED\n', '\nDuring handling of the above exception, another exception occurred:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'pymysql.err.OperationalError: (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n', '\nThe above exception was the direct cause of the following exception:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/nova/conductor/manager.py", line 142, in _object_dispatch\n    return getattr(target, method)(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 226, in wrapper\n    return fn(self, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/objects/service.py", line 505, in save\n    db_service = db.service_update(self._context, self.id, updates)\n', '  File "/usr/lib/python3.9/site-packages/oslo_db/api.py", line 154, in wrapper\n    ectxt.value = e.inner_exc\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__\n    self.force_reraise()\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise\n    raise self.value\n', '  File "/usr/lib/python3.9/site-packages/oslo_db/api.py", line 142, in wrapper\n    return f(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 207, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 563, in service_update\n    service_ref = service_get(context, service_id)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 224, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 398, in service_get\n    result = query.first()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2824, in first\n    return self.limit(1)._iter().first()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2916, in _iter\n    result = self.session.execute(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1713, in execute\n    conn = self._connection_for_bind(bind)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1552, in _connection_for_bind\n    return self._transaction._connection_for_bind(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 747, in _connection_for_bind\n    conn = bind.connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3315, in connect\n    return self._connection_cls(self, close_with_result=close_with_result)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 96, in __init__\n    else engine.raw_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3394, in raw_connection\n    return self._wrap_pool_connect(self.pool.connect, _connection)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3364, in _wrap_pool_connect\n    Connection._handle_dbapi_exception_noconnection(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 2196, in _handle_dbapi_exception_noconnection\n    util.raise_(newraise, with_traceback=exc_info[2], from_=e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self
Dec  9 11:14:35 compute-0 nova_compute[189493]: 2025-12-09 11:14:35.828 189497 ERROR nova.servicegroup.drivers.db Traceback (most recent call last):
Dec  9 11:14:35 compute-0 nova_compute[189493]: 2025-12-09 11:14:35.828 189497 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py", line 92, in _report_state
Dec  9 11:14:35 compute-0 nova_compute[189493]: 2025-12-09 11:14:35.828 189497 ERROR nova.servicegroup.drivers.db     service.service_ref.save()
Dec  9 11:14:35 compute-0 nova_compute[189493]: 2025-12-09 11:14:35.828 189497 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 209, in wrapper
Dec  9 11:14:35 compute-0 nova_compute[189493]: 2025-12-09 11:14:35.828 189497 ERROR nova.servicegroup.drivers.db     updates, result = self.indirection_api.object_action(
Dec  9 11:14:35 compute-0 nova_compute[189493]: 2025-12-09 11:14:35.828 189497 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/nova/conductor/rpcapi.py", line 247, in object_action
Dec  9 11:14:35 compute-0 nova_compute[189493]: 2025-12-09 11:14:35.828 189497 ERROR nova.servicegroup.drivers.db     return cctxt.call(context, 'object_action', objinst=objinst,
Dec  9 11:14:35 compute-0 nova_compute[189493]: 2025-12-09 11:14:35.828 189497 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/client.py", line 190, in call
Dec  9 11:14:35 compute-0 nova_compute[189493]: 2025-12-09 11:14:35.828 189497 ERROR nova.servicegroup.drivers.db     result = self.transport._send(
Dec  9 11:14:35 compute-0 nova_compute[189493]: 2025-12-09 11:14:35.828 189497 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 123, in _send
Dec  9 11:14:35 compute-0 nova_compute[189493]: 2025-12-09 11:14:35.828 189497 ERROR nova.servicegroup.drivers.db     return self._driver.send(target, ctxt, message,
Dec  9 11:14:35 compute-0 nova_compute[189493]: 2025-12-09 11:14:35.828 189497 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 689, in send
Dec  9 11:14:35 compute-0 nova_compute[189493]: 2025-12-09 11:14:35.828 189497 ERROR nova.servicegroup.drivers.db     return self._send(target, ctxt, message, wait_for_reply, timeout,
Dec  9 11:14:35 compute-0 nova_compute[189493]: 2025-12-09 11:14:35.828 189497 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 681, in _send
Dec  9 11:14:35 compute-0 nova_compute[189493]: 2025-12-09 11:14:35.828 189497 ERROR nova.servicegroup.drivers.db     raise result
Dec  9 11:14:35 compute-0 nova_compute[189493]: 2025-12-09 11:14:35.828 189497 ERROR nova.servicegroup.drivers.db oslo_messaging.rpc.client.RemoteError: Remote error: DBConnectionError (pymysql.err.OperationalError) (2003, "Can't connect to MySQL server on 'openstack-cell1.openstack.svc' ([Errno 111] ECONNREFUSED)")
Dec  9 11:14:35 compute-0 nova_compute[189493]: 2025-12-09 11:14:35.828 189497 ERROR nova.servicegroup.drivers.db (Background on this error at: https://sqlalche.me/e/14/e3q8)
Dec  9 11:14:35 compute-0 nova_compute[189493]: 2025-12-09 11:14:35.828 189497 ERROR nova.servicegroup.drivers.db ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 569, in connect\n    sock = socket.create_connection(\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 63, in create_connection\n    raise err\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 53, in create_connection\n    sock.connect(sa)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 270, in connect\n    socket_checkerr(fd)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 54, in socket_checkerr\n    raise socket.error(err, errno.errorcode[err])\n', 'ConnectionRefusedError: [Errno 111] ECONNREFUSED\n', '\nDuring handling of the above exception, another exception occurred:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'pymysql.err.OperationalError: (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n', '\nThe above exception was the direct cause of the following exception:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/nova/conductor/manager.py", line 142, in _object_dispatch\n    return getattr(target, method)(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 226, in wrapper\n    return fn(self, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/objects/service.py", line 505, in save\n    db_service = db.service_update(self._context, self.id, updates)\n', '  File "/usr/lib/python3.9/site-packages/oslo_db/api.py", line 154, in wrapper\n    ectxt.value = e.inner_exc\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__\n    self.force_reraise()\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise\n    raise self.value\n', '  File "/usr/lib/python3.9/site-packages/oslo_db/api.py", line 142, in wrapper\n    return f(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 207, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 563, in service_update\n    service_ref = service_get(context, service_id)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 224, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 398, in service_get\n    result = query.first()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2824, in first\n    return self.limit(1)._iter().first()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2916, in _iter\n    result = self.session.execute(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1713, in execute\n    conn = self._connection_for_bind(bind)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1552, in _connection_for_bind\n    return self._transaction._connection_for_bind(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 747, in _connection_for_bind\n    conn = bind.connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3315, in connect\n    return self._connection_cls(self, close_with_result=close_with_result)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 96, in __init__\n    else engine.raw_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3394, in raw_connection\n    return self._wrap_pool_connect(self.pool.connect, _connection)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3364, in _wrap_pool_connect\n    Connection._handle_dbapi_exception_noconnection(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 2196, in _handle_dbapi_exception_noconnection\n    util.raise_(newraise, with_traceback=exc_info[2], from_=e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n
Dec  9 11:14:35 compute-0 nova_compute[189493]: 2025-12-09 11:14:35.828 189497 ERROR nova.servicegroup.drivers.db #033[00m
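Because the heartbeat keeps failing with the same (2003, ECONNREFUSED), a quick out-of-band probe of the cell1 MySQL endpoint distinguishes "nothing listening" from credential or driver problems. A minimal sketch using the same pymysql driver the tracebacks name; the port 3306 and the credentials are illustrative assumptions, not values taken from this log:

    import socket
    import pymysql  # same DBAPI driver named in the tracebacks above

    HOST, PORT = "openstack-cell1.openstack.svc", 3306  # port assumed; the log omits it

    try:
        # pymysql error 2003 / [Errno 111] ECONNREFUSED originates at this TCP connect step.
        socket.create_connection((HOST, PORT), timeout=5).close()
    except OSError as exc:
        print(f"TCP connect failed: {exc}")
    else:
        # If TCP works, repeat oslo_db's liveness ping: a literal SELECT 1.
        conn = pymysql.connect(host=HOST, port=PORT, user="nova",
                               password="REDACTED", connect_timeout=5)  # hypothetical credentials
        try:
            with conn.cursor() as cur:
                cur.execute("SELECT 1")
                print("SELECT 1 ->", cur.fetchone())
        finally:
            conn.close()

If the socket.create_connection() step already raises ConnectionRefusedError, the fault lies with the database service (or whatever fronts openstack-cell1.openstack.svc), not with nova-compute.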
Dec  9 11:14:36 compute-0 rsyslogd[236818]: message too long (8986) with configured size 8096, begin of message is: ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packag [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Dec  9 11:14:36 compute-0 rsyslogd[236818]: message too long (9052) with configured size 8096, begin of message is: 2025-12-09 11:14:35.828 189497 ERROR nova.servicegroup.drivers.db ['Traceback (m [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Dec  9 11:14:37 compute-0 nova_compute[189493]: 2025-12-09 11:14:37.951 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:14:39 compute-0 podman[251769]: 2025-12-09 11:14:39.466387967 +0000 UTC m=+0.091193013 container health_status 0391d8911d61abd7376f1f93f329cadfe8d3add845c9e6f46fc2c3dfbcc4f02a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, container_name=multipathd)
Dec  9 11:14:40 compute-0 nova_compute[189493]: 2025-12-09 11:14:40.685 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:14:42 compute-0 nova_compute[189493]: 2025-12-09 11:14:42.848 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  9 11:14:42 compute-0 nova_compute[189493]: 2025-12-09 11:14:42.955 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:14:44 compute-0 nova_compute[189493]: 2025-12-09 11:14:44.840 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  9 11:14:44 compute-0 nova_compute[189493]: 2025-12-09 11:14:44.841 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  9 11:14:44 compute-0 podman[251789]: 2025-12-09 11:14:44.883055285 +0000 UTC m=+0.101499753 container health_status 8508a94dacd5acdb5dbf860f4282331529be5c86ebd3e90b10e1dde8bc5013e9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  9 11:14:45 compute-0 nova_compute[189493]: 2025-12-09 11:14:45.688 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:14:45 compute-0 nova_compute[189493]: 2025-12-09 11:14:45.801 189497 ERROR nova.servicegroup.drivers.db [-] Unexpected error while reporting service status: oslo_messaging.rpc.client.RemoteError: Remote error: DBConnectionError (pymysql.err.OperationalError) (2003, "Can't connect to MySQL server on 'openstack-cell1.openstack.svc' ([Errno 111] ECONNREFUSED)")
Dec  9 11:14:45 compute-0 nova_compute[189493]: (Background on this error at: https://sqlalche.me/e/14/e3q8)
Dec  9 11:14:45 compute-0 nova_compute[189493]: ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 569, in connect\n    sock = socket.create_connection(\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 63, in create_connection\n    raise err\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 53, in create_connection\n    sock.connect(sa)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 270, in connect\n    socket_checkerr(fd)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 54, in socket_checkerr\n    raise socket.error(err, errno.errorcode[err])\n', 'ConnectionRefusedError: [Errno 111] ECONNREFUSED\n', '\nDuring handling of the above exception, another exception occurred:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'pymysql.err.OperationalError: (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n', '\nThe above exception was the direct cause of the following exception:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/nova/conductor/manager.py", line 142, in _object_dispatch\n    return getattr(target, method)(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 226, in wrapper\n    return fn(self, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/objects/service.py", line 505, in save\n    db_service = db.service_update(self._context, self.id, updates)\n', '  File "/usr/lib/python3.9/site-packages/oslo_db/api.py", line 154, in wrapper\n    ectxt.value = e.inner_exc\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__\n    self.force_reraise()\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise\n    raise self.value\n', '  File "/usr/lib/python3.9/site-packages/oslo_db/api.py", line 142, in wrapper\n    return f(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 207, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 563, in service_update\n    service_ref = service_get(context, service_id)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 224, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 398, in service_get\n    result = query.first()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2824, in first\n    return self.limit(1)._iter().first()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2916, in _iter\n    result = self.session.execute(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1713, in execute\n    conn = self._connection_for_bind(bind)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1552, in _connection_for_bind\n    return self._transaction._connection_for_bind(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 747, in _connection_for_bind\n    conn = bind.connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3315, in connect\n    return self._connection_cls(self, close_with_result=close_with_result)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 96, in __init__\n    else engine.raw_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3394, in raw_connection\n    return self._wrap_pool_connect(self.pool.connect, _connection)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3364, in _wrap_pool_connect\n    Connection._handle_dbapi_exception_noconnection(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 2196, in _handle_dbapi_exception_noconnection\n    util.raise_(newraise, with_traceback=exc_info[2], from_=e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self
Dec  9 11:14:45 compute-0 nova_compute[189493]: 2025-12-09 11:14:45.801 189497 ERROR nova.servicegroup.drivers.db Traceback (most recent call last):
Dec  9 11:14:45 compute-0 nova_compute[189493]: 2025-12-09 11:14:45.801 189497 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py", line 92, in _report_state
Dec  9 11:14:45 compute-0 nova_compute[189493]: 2025-12-09 11:14:45.801 189497 ERROR nova.servicegroup.drivers.db     service.service_ref.save()
Dec  9 11:14:45 compute-0 nova_compute[189493]: 2025-12-09 11:14:45.801 189497 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 209, in wrapper
Dec  9 11:14:45 compute-0 nova_compute[189493]: 2025-12-09 11:14:45.801 189497 ERROR nova.servicegroup.drivers.db     updates, result = self.indirection_api.object_action(
Dec  9 11:14:45 compute-0 nova_compute[189493]: 2025-12-09 11:14:45.801 189497 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/nova/conductor/rpcapi.py", line 247, in object_action
Dec  9 11:14:45 compute-0 nova_compute[189493]: 2025-12-09 11:14:45.801 189497 ERROR nova.servicegroup.drivers.db     return cctxt.call(context, 'object_action', objinst=objinst,
Dec  9 11:14:45 compute-0 nova_compute[189493]: 2025-12-09 11:14:45.801 189497 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/client.py", line 190, in call
Dec  9 11:14:45 compute-0 nova_compute[189493]: 2025-12-09 11:14:45.801 189497 ERROR nova.servicegroup.drivers.db     result = self.transport._send(
Dec  9 11:14:45 compute-0 nova_compute[189493]: 2025-12-09 11:14:45.801 189497 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 123, in _send
Dec  9 11:14:45 compute-0 nova_compute[189493]: 2025-12-09 11:14:45.801 189497 ERROR nova.servicegroup.drivers.db     return self._driver.send(target, ctxt, message,
Dec  9 11:14:45 compute-0 nova_compute[189493]: 2025-12-09 11:14:45.801 189497 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 689, in send
Dec  9 11:14:45 compute-0 nova_compute[189493]: 2025-12-09 11:14:45.801 189497 ERROR nova.servicegroup.drivers.db     return self._send(target, ctxt, message, wait_for_reply, timeout,
Dec  9 11:14:45 compute-0 nova_compute[189493]: 2025-12-09 11:14:45.801 189497 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 681, in _send
Dec  9 11:14:45 compute-0 nova_compute[189493]: 2025-12-09 11:14:45.801 189497 ERROR nova.servicegroup.drivers.db     raise result
Dec  9 11:14:45 compute-0 nova_compute[189493]: 2025-12-09 11:14:45.801 189497 ERROR nova.servicegroup.drivers.db oslo_messaging.rpc.client.RemoteError: Remote error: DBConnectionError (pymysql.err.OperationalError) (2003, "Can't connect to MySQL server on 'openstack-cell1.openstack.svc' ([Errno 111] ECONNREFUSED)")
Dec  9 11:14:45 compute-0 nova_compute[189493]: 2025-12-09 11:14:45.801 189497 ERROR nova.servicegroup.drivers.db (Background on this error at: https://sqlalche.me/e/14/e3q8)
Dec  9 11:14:45 compute-0 nova_compute[189493]: 2025-12-09 11:14:45.801 189497 ERROR nova.servicegroup.drivers.db ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 569, in connect\n    sock = socket.create_connection(\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 63, in create_connection\n    raise err\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 53, in create_connection\n    sock.connect(sa)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 270, in connect\n    socket_checkerr(fd)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 54, in socket_checkerr\n    raise socket.error(err, errno.errorcode[err])\n', 'ConnectionRefusedError: [Errno 111] ECONNREFUSED\n', '\nDuring handling of the above exception, another exception occurred:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'pymysql.err.OperationalError: (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n', '\nThe above exception was the direct cause of the following exception:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/nova/conductor/manager.py", line 142, in _object_dispatch\n    return getattr(target, method)(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 226, in wrapper\n    return fn(self, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/objects/service.py", line 505, in save\n    db_service = db.service_update(self._context, self.id, updates)\n', '  File "/usr/lib/python3.9/site-packages/oslo_db/api.py", line 154, in wrapper\n    ectxt.value = e.inner_exc\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__\n    self.force_reraise()\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise\n    raise self.value\n', '  File "/usr/lib/python3.9/site-packages/oslo_db/api.py", line 142, in wrapper\n    return f(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 207, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 563, in service_update\n    service_ref = service_get(context, service_id)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 224, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 398, in service_get\n    result = query.first()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2824, in first\n    return self.limit(1)._iter().first()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2916, in _iter\n    result = self.session.execute(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1713, in execute\n    conn = self._connection_for_bind(bind)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1552, in _connection_for_bind\n    return self._transaction._connection_for_bind(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 747, in _connection_for_bind\n    conn = bind.connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3315, in connect\n    return self._connection_cls(self, close_with_result=close_with_result)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 96, in __init__\n    else engine.raw_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3394, in raw_connection\n    return self._wrap_pool_connect(self.pool.connect, _connection)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3364, in _wrap_pool_connect\n    Connection._handle_dbapi_exception_noconnection(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 2196, in _handle_dbapi_exception_noconnection\n    util.raise_(newraise, with_traceback=exc_info[2], from_=e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n
Dec  9 11:14:45 compute-0 nova_compute[189493]: 2025-12-09 11:14:45.801 189497 ERROR nova.servicegroup.drivers.db #033[00m
Dec  9 11:14:46 compute-0 rsyslogd[236818]: message too long (8986) with configured size 8096, begin of message is: ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packag [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Dec  9 11:14:46 compute-0 rsyslogd[236818]: message too long (9052) with configured size 8096, begin of message is: 2025-12-09 11:14:45.801 189497 ERROR nova.servicegroup.drivers.db ['Traceback (m [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Dec  9 11:14:46 compute-0 nova_compute[189493]: 2025-12-09 11:14:46.842 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  9 11:14:47 compute-0 nova_compute[189493]: 2025-12-09 11:14:47.957 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:14:47 compute-0 podman[251813]: 2025-12-09 11:14:47.979646562 +0000 UTC m=+0.117078230 container health_status 8ad198c17f1da12dc50d5e17562d0139fb2a2f84db056ee9551dbf4f34c4cb9d (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.29.0, name=ubi9, com.redhat.component=ubi9-container, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, architecture=x86_64, container_name=kepler, maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., distribution-scope=public, release=1214.1726694543, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible, version=9.4, vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.openshift.tags=base rhel9, io.k8s.display-name=Red Hat Universal Base Image 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f)
Dec  9 11:14:48 compute-0 podman[251814]: 2025-12-09 11:14:48.001134454 +0000 UTC m=+0.132039321 container health_status ceb1c84a2b093143b9383b7e11364d7e851348d724743a0cd9ce4fd0c7070c92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_id=edpm, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, tcib_managed=true)
Dec  9 11:14:49 compute-0 nova_compute[189493]: 2025-12-09 11:14:49.841 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  9 11:14:49 compute-0 podman[251852]: 2025-12-09 11:14:49.985015101 +0000 UTC m=+0.118527677 container health_status b432835229990b9e7cd237d75f8273b15e565fca524d4ea9a7c1f1bf3c773614 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.4, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_id=edpm, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251125)
Dec  9 11:14:49 compute-0 podman[251851]: 2025-12-09 11:14:49.988022089 +0000 UTC m=+0.127475040 container health_status 8f562587c42532f877bd4ac5090cf2d81dd9415b6201e22f74972e6d6b9e9403 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent)
Dec  9 11:14:50 compute-0 nova_compute[189493]: 2025-12-09 11:14:50.692 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:14:50 compute-0 nova_compute[189493]: 2025-12-09 11:14:50.844 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  9 11:14:52 compute-0 nova_compute[189493]: 2025-12-09 11:14:52.962 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:14:53 compute-0 nova_compute[189493]: 2025-12-09 11:14:53.842 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  9 11:14:53 compute-0 nova_compute[189493]: 2025-12-09 11:14:53.894 189497 ERROR oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Error during ComputeManager.update_available_resource: oslo_messaging.rpc.client.RemoteError: Remote error: DBConnectionError (pymysql.err.OperationalError) (2003, "Can't connect to MySQL server on 'openstack-cell1.openstack.svc' ([Errno 111] ECONNREFUSED)")
Dec  9 11:14:53 compute-0 nova_compute[189493]: (Background on this error at: https://sqlalche.me/e/14/e3q8)
Dec  9 11:14:53 compute-0 nova_compute[189493]: ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 569, in connect\n    sock = socket.create_connection(\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 63, in create_connection\n    raise err\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 53, in create_connection\n    sock.connect(sa)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 270, in connect\n    socket_checkerr(fd)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 54, in socket_checkerr\n    raise socket.error(err, errno.errorcode[err])\n', 'ConnectionRefusedError: [Errno 111] ECONNREFUSED\n', '\nDuring handling of the above exception, another exception occurred:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'pymysql.err.OperationalError: (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n', '\nThe above exception was the direct cause of the following exception:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/nova/conductor/manager.py", line 142, in _object_dispatch\n    return getattr(target, method)(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 184, in wrapper\n    result = fn(cls, context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/objects/compute_node.py", line 485, in get_all_by_host\n    db_computes = cls._db_compute_node_get_all_by_host(context, host,\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 179, in wrapper\n    return f(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/objects/compute_node.py", line 481, in _db_compute_node_get_all_by_host\n    return db.compute_node_get_all_by_host(context, host)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 241, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 738, in compute_node_get_all_by_host\n    results = _compute_node_fetchall(context, {"host": host})\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 616, in _compute_node_fetchall\n    with engine.connect() as conn, conn.begin():\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3315, in connect\n    return self._connection_cls(self, close_with_result=close_with_result)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 96, in __init__\n    else engine.raw_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3394, in raw_connection\n    return self._wrap_pool_connect(self.pool.connect, _connection)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3364, in _wrap_pool_connect\n    Connection._handle_dbapi_exception_noconnection(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 2196, in _handle_dbapi_exception_noconnection\n    util.raise_(newraise, with_traceback=exc_info[2], from_=e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'oslo_db.exception.DBConnectionError: (pymysql.err.OperationalError) (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n(Background on this error at: https://sqlalche.me/e/14/e3q8)\n'].
Dec  9 11:14:53 compute-0 nova_compute[189493]: 2025-12-09 11:14:53.894 189497 ERROR oslo_service.periodic_task Traceback (most recent call last):
Dec  9 11:14:53 compute-0 nova_compute[189493]: 2025-12-09 11:14:53.894 189497 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/oslo_service/periodic_task.py", line 216, in run_periodic_tasks
Dec  9 11:14:53 compute-0 nova_compute[189493]: 2025-12-09 11:14:53.894 189497 ERROR oslo_service.periodic_task     task(self, context)
Dec  9 11:14:53 compute-0 nova_compute[189493]: 2025-12-09 11:14:53.894 189497 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 10584, in update_available_resource
Dec  9 11:14:53 compute-0 nova_compute[189493]: 2025-12-09 11:14:53.894 189497 ERROR oslo_service.periodic_task     compute_nodes_in_db = self._get_compute_nodes_in_db(context,
Dec  9 11:14:53 compute-0 nova_compute[189493]: 2025-12-09 11:14:53.894 189497 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 10631, in _get_compute_nodes_in_db
Dec  9 11:14:53 compute-0 nova_compute[189493]: 2025-12-09 11:14:53.894 189497 ERROR oslo_service.periodic_task     return objects.ComputeNodeList.get_all_by_host(context, self.host,
Dec  9 11:14:53 compute-0 nova_compute[189493]: 2025-12-09 11:14:53.894 189497 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 175, in wrapper
Dec  9 11:14:53 compute-0 nova_compute[189493]: 2025-12-09 11:14:53.894 189497 ERROR oslo_service.periodic_task     result = cls.indirection_api.object_class_action_versions(
Dec  9 11:14:53 compute-0 nova_compute[189493]: 2025-12-09 11:14:53.894 189497 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/nova/conductor/rpcapi.py", line 240, in object_class_action_versions
Dec  9 11:14:53 compute-0 nova_compute[189493]: 2025-12-09 11:14:53.894 189497 ERROR oslo_service.periodic_task     return cctxt.call(context, 'object_class_action_versions',
Dec  9 11:14:53 compute-0 nova_compute[189493]: 2025-12-09 11:14:53.894 189497 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/client.py", line 190, in call
Dec  9 11:14:53 compute-0 nova_compute[189493]: 2025-12-09 11:14:53.894 189497 ERROR oslo_service.periodic_task     result = self.transport._send(
Dec  9 11:14:53 compute-0 nova_compute[189493]: 2025-12-09 11:14:53.894 189497 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 123, in _send
Dec  9 11:14:53 compute-0 nova_compute[189493]: 2025-12-09 11:14:53.894 189497 ERROR oslo_service.periodic_task     return self._driver.send(target, ctxt, message,
Dec  9 11:14:53 compute-0 nova_compute[189493]: 2025-12-09 11:14:53.894 189497 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 689, in send
Dec  9 11:14:53 compute-0 nova_compute[189493]: 2025-12-09 11:14:53.894 189497 ERROR oslo_service.periodic_task     return self._send(target, ctxt, message, wait_for_reply, timeout,
Dec  9 11:14:53 compute-0 nova_compute[189493]: 2025-12-09 11:14:53.894 189497 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 681, in _send
Dec  9 11:14:53 compute-0 nova_compute[189493]: 2025-12-09 11:14:53.894 189497 ERROR oslo_service.periodic_task     raise result
Dec  9 11:14:53 compute-0 nova_compute[189493]: 2025-12-09 11:14:53.894 189497 ERROR oslo_service.periodic_task oslo_messaging.rpc.client.RemoteError: Remote error: DBConnectionError (pymysql.err.OperationalError) (2003, "Can't connect to MySQL server on 'openstack-cell1.openstack.svc' ([Errno 111] ECONNREFUSED)")
Dec  9 11:14:53 compute-0 nova_compute[189493]: 2025-12-09 11:14:53.894 189497 ERROR oslo_service.periodic_task (Background on this error at: https://sqlalche.me/e/14/e3q8)
Dec  9 11:14:53 compute-0 nova_compute[189493]: 2025-12-09 11:14:53.894 189497 ERROR oslo_service.periodic_task ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 569, in connect\n    sock = socket.create_connection(\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 63, in create_connection\n    raise err\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 53, in create_connection\n    sock.connect(sa)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 270, in connect\n    socket_checkerr(fd)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 54, in socket_checkerr\n    raise socket.error(err, errno.errorcode[err])\n', 'ConnectionRefusedError: [Errno 111] ECONNREFUSED\n', '\nDuring handling of the above exception, another exception occurred:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'pymysql.err.OperationalError: (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n', '\nThe above exception was the direct cause of the following exception:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/nova/conductor/manager.py", line 142, in _object_dispatch\n    return getattr(target, method)(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 184, in wrapper\n    result = fn(cls, context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/objects/compute_node.py", line 485, in get_all_by_host\n    db_computes = cls._db_compute_node_get_all_by_host(context, host,\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 179, in wrapper\n    return f(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/objects/compute_node.py", line 481, in _db_compute_node_get_all_by_host\n    return db.compute_node_get_all_by_host(context, host)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 241, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 738, in compute_node_get_all_by_host\n    results = _compute_node_fetchall(context, {"host": host})\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 616, in _compute_node_fetchall\n    with engine.connect() as conn, conn.begin():\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3315, in connect\n    return self._connection_cls(self, close_with_result=close_with_result)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 96, in __init__\n    else engine.raw_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3394, in raw_connection\n    return self._wrap_pool_connect(self.pool.connect, _connection)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3364, in _wrap_pool_connect\n    Connection._handle_dbapi_exception_noconnection(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 2196, in _handle_dbapi_exception_noconnection\n    util.raise_(newraise, with_traceback=exc_info[2], from_=e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'oslo_db.exception.DBConnectionError: (pymysql.err.OperationalError) (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n(Background on this error at:
Dec  9 11:14:53 compute-0 nova_compute[189493]: 2025-12-09 11:14:53.894 189497 ERROR oslo_service.periodic_task #033[00m
Dec  9 11:14:54 compute-0 rsyslogd[236818]: message too long (8132) with configured size 8096, begin of message is: 2025-12-09 11:14:53.894 189497 ERROR oslo_service.periodic_task ['Traceback (mos [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Dec  9 11:14:55 compute-0 nova_compute[189493]: 2025-12-09 11:14:55.695 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:14:55 compute-0 nova_compute[189493]: 2025-12-09 11:14:55.899 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  9 11:14:55 compute-0 nova_compute[189493]: 2025-12-09 11:14:55.900 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858#033[00m
Dec  9 11:14:55 compute-0 nova_compute[189493]: 2025-12-09 11:14:55.900 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862#033[00m
Dec  9 11:14:56 compute-0 nova_compute[189493]: 2025-12-09 11:14:56.065 189497 ERROR nova.servicegroup.drivers.db [-] Unexpected error while reporting service status: oslo_messaging.rpc.client.RemoteError: Remote error: DBConnectionError (pymysql.err.OperationalError) (2003, "Can't connect to MySQL server on 'openstack-cell1.openstack.svc' ([Errno 111] ECONNREFUSED)")
Dec  9 11:14:56 compute-0 nova_compute[189493]: (Background on this error at: https://sqlalche.me/e/14/e3q8)
Dec  9 11:14:56 compute-0 nova_compute[189493]: ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 569, in connect\n    sock = socket.create_connection(\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 63, in create_connection\n    raise err\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 53, in create_connection\n    sock.connect(sa)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 270, in connect\n    socket_checkerr(fd)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 54, in socket_checkerr\n    raise socket.error(err, errno.errorcode[err])\n', 'ConnectionRefusedError: [Errno 111] ECONNREFUSED\n', '\nDuring handling of the above exception, another exception occurred:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'pymysql.err.OperationalError: (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n', '\nThe above exception was the direct cause of the following exception:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/nova/conductor/manager.py", line 142, in _object_dispatch\n    return getattr(target, method)(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 226, in wrapper\n    return fn(self, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/objects/service.py", line 505, in save\n    db_service = db.service_update(self._context, self.id, updates)\n', '  File "/usr/lib/python3.9/site-packages/oslo_db/api.py", line 154, in wrapper\n    ectxt.value = e.inner_exc\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__\n    self.force_reraise()\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise\n    raise self.value\n', '  File "/usr/lib/python3.9/site-packages/oslo_db/api.py", line 142, in wrapper\n    return f(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 207, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 563, in service_update\n    service_ref = service_get(context, service_id)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 224, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 398, in service_get\n    result = query.first()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2824, in first\n    return self.limit(1)._iter().first()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2916, in _iter\n    result = self.session.execute(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1713, in execute\n    conn = self._connection_for_bind(bind)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1552, in _connection_for_bind\n    return self._transaction._connection_for_bind(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 747, in _connection_for_bind\n    conn = bind.connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3315, in connect\n    return self._connection_cls(self, close_with_result=close_with_result)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 96, in __init__\n    else engine.raw_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3394, in raw_connection\n    return self._wrap_pool_connect(self.pool.connect, _connection)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3364, in _wrap_pool_connect\n    Connection._handle_dbapi_exception_noconnection(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 2196, in _handle_dbapi_exception_noconnection\n    util.raise_(newraise, with_traceback=exc_info[2], from_=e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self
Dec  9 11:14:56 compute-0 nova_compute[189493]: 2025-12-09 11:14:56.065 189497 ERROR nova.servicegroup.drivers.db Traceback (most recent call last):
Dec  9 11:14:56 compute-0 nova_compute[189493]: 2025-12-09 11:14:56.065 189497 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py", line 92, in _report_state
Dec  9 11:14:56 compute-0 nova_compute[189493]: 2025-12-09 11:14:56.065 189497 ERROR nova.servicegroup.drivers.db     service.service_ref.save()
Dec  9 11:14:56 compute-0 nova_compute[189493]: 2025-12-09 11:14:56.065 189497 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 209, in wrapper
Dec  9 11:14:56 compute-0 nova_compute[189493]: 2025-12-09 11:14:56.065 189497 ERROR nova.servicegroup.drivers.db     updates, result = self.indirection_api.object_action(
Dec  9 11:14:56 compute-0 nova_compute[189493]: 2025-12-09 11:14:56.065 189497 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/nova/conductor/rpcapi.py", line 247, in object_action
Dec  9 11:14:56 compute-0 rsyslogd[236818]: message too long (8986) with configured size 8096, begin of message is: ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packag [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Dec  9 11:14:56 compute-0 nova_compute[189493]: 2025-12-09 11:14:56.065 189497 ERROR nova.servicegroup.drivers.db     return cctxt.call(context, 'object_action', objinst=objinst,
Dec  9 11:14:56 compute-0 nova_compute[189493]: 2025-12-09 11:14:56.065 189497 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/client.py", line 190, in call
Dec  9 11:14:56 compute-0 nova_compute[189493]: 2025-12-09 11:14:56.065 189497 ERROR nova.servicegroup.drivers.db     result = self.transport._send(
Dec  9 11:14:56 compute-0 nova_compute[189493]: 2025-12-09 11:14:56.065 189497 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 123, in _send
Dec  9 11:14:56 compute-0 nova_compute[189493]: 2025-12-09 11:14:56.065 189497 ERROR nova.servicegroup.drivers.db     return self._driver.send(target, ctxt, message,
Dec  9 11:14:56 compute-0 nova_compute[189493]: 2025-12-09 11:14:56.065 189497 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 689, in send
Dec  9 11:14:56 compute-0 nova_compute[189493]: 2025-12-09 11:14:56.065 189497 ERROR nova.servicegroup.drivers.db     return self._send(target, ctxt, message, wait_for_reply, timeout,
Dec  9 11:14:56 compute-0 nova_compute[189493]: 2025-12-09 11:14:56.065 189497 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 681, in _send
Dec  9 11:14:56 compute-0 nova_compute[189493]: 2025-12-09 11:14:56.065 189497 ERROR nova.servicegroup.drivers.db     raise result
Dec  9 11:14:56 compute-0 nova_compute[189493]: 2025-12-09 11:14:56.065 189497 ERROR nova.servicegroup.drivers.db oslo_messaging.rpc.client.RemoteError: Remote error: DBConnectionError (pymysql.err.OperationalError) (2003, "Can't connect to MySQL server on 'openstack-cell1.openstack.svc' ([Errno 111] ECONNREFUSED)")
Dec  9 11:14:56 compute-0 nova_compute[189493]: 2025-12-09 11:14:56.065 189497 ERROR nova.servicegroup.drivers.db (Background on this error at: https://sqlalche.me/e/14/e3q8)
Dec  9 11:14:56 compute-0 nova_compute[189493]: 2025-12-09 11:14:56.065 189497 ERROR nova.servicegroup.drivers.db ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 569, in connect\n    sock = socket.create_connection(\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 63, in create_connection\n    raise err\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 53, in create_connection\n    sock.connect(sa)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 270, in connect\n    socket_checkerr(fd)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 54, in socket_checkerr\n    raise socket.error(err, errno.errorcode[err])\n', 'ConnectionRefusedError: [Errno 111] ECONNREFUSED\n', '\nDuring handling of the above exception, another exception occurred:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'pymysql.err.OperationalError: (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n', '\nThe above exception was the direct cause of the following exception:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/nova/conductor/manager.py", line 142, in _object_dispatch\n    return getattr(target, method)(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 226, in wrapper\n    return fn(self, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/objects/service.py", line 505, in save\n    db_service = db.service_update(self._context, self.id, updates)\n', '  File "/usr/lib/python3.9/site-packages/oslo_db/api.py", line 154, in wrapper\n    ectxt.value = e.inner_exc\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__\n    self.force_reraise()\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise\n    raise self.value\n', '  File "/usr/lib/python3.9/site-packages/oslo_db/api.py", line 142, in wrapper\n    return f(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 207, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 563, in service_update\n    service_ref = service_get(context, service_id)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 224, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 398, in service_get\n    result = query.first()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2824, in first\n    return self.limit(1)._iter().first()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2916, in _iter\n    result = self.session.execute(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1713, in execute\n    conn = self._connection_for_bind(bind)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1552, in _connection_for_bind\n    return self._transaction._connection_for_bind(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 747, in _connection_for_bind\n    conn = bind.connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3315, in connect\n    return self._connection_cls(self, close_with_result=close_with_result)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 96, in __init__\n    else engine.raw_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3394, in raw_connection\n    return self._wrap_pool_connect(self.pool.connect, _connection)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3364, in _wrap_pool_connect\n    Connection._handle_dbapi_exception_noconnection(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 2196, in _handle_dbapi_exception_noconnection\n    util.raise_(newraise, with_traceback=exc_info[2], from_=e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n
Dec  9 11:14:56 compute-0 nova_compute[189493]: 2025-12-09 11:14:56.065 189497 ERROR nova.servicegroup.drivers.db #033[00m
Dec  9 11:14:56 compute-0 nova_compute[189493]: 2025-12-09 11:14:56.068 189497 ERROR oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Error during ComputeManager._heal_instance_info_cache: oslo_messaging.rpc.client.RemoteError: Remote error: DBConnectionError (pymysql.err.OperationalError) (2003, "Can't connect to MySQL server on 'openstack-cell1.openstack.svc' ([Errno 111] ECONNREFUSED)")
Dec  9 11:14:56 compute-0 nova_compute[189493]: (Background on this error at: https://sqlalche.me/e/14/e3q8)
Dec  9 11:14:56 compute-0 nova_compute[189493]: ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 569, in connect\n    sock = socket.create_connection(\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 63, in create_connection\n    raise err\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 53, in create_connection\n    sock.connect(sa)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 270, in connect\n    socket_checkerr(fd)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 54, in socket_checkerr\n    raise socket.error(err, errno.errorcode[err])\n', 'ConnectionRefusedError: [Errno 111] ECONNREFUSED\n', '\nDuring handling of the above exception, another exception occurred:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'pymysql.err.OperationalError: (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n', '\nThe above exception was the direct cause of the following exception:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/nova/conductor/manager.py", line 142, in _object_dispatch\n    return getattr(target, method)(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 184, in wrapper\n    result = fn(cls, context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/objects/instance.py", line 1378, in get_by_host\n    db_inst_list = cls._db_instance_get_all_by_host(\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 179, in wrapper\n    return f(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/objects/instance.py", line 1373, in _db_instance_get_all_by_host\n    return db.instance_get_all_by_host(context, host,\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 241, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 2155, in instance_get_all_by_host\n    instances = query.filter_by(host=host).all()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2773, in all\n    return self._iter().all()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2916, in _iter\n    result = self.session.execute(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1713, in execute\n    conn = self._connection_for_bind(bind)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1552, in _connection_for_bind\n    return self._transaction._connection_for_bind(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 747, in _connection_for_bind\n    conn = bind.connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3315, in connect\n    return self._connection_cls(self, close_with_result=close_with_result)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 96, in __init__\n    else engine.raw_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3394, in raw_connection\n    return self._wrap_pool_connect(self.pool.connect, _connection)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3364, in _wrap_pool_connect\n    Connection._handle_dbapi_exception_noconnection(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 2196, in _handle_dbapi_exception_noconnection\n    util.raise_(newraise, with_traceback=exc_info[2], from_=e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n'
Dec  9 11:14:56 compute-0 nova_compute[189493]: 2025-12-09 11:14:56.068 189497 ERROR oslo_service.periodic_task Traceback (most recent call last):
Dec  9 11:14:56 compute-0 nova_compute[189493]: 2025-12-09 11:14:56.068 189497 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/oslo_service/periodic_task.py", line 216, in run_periodic_tasks
Dec  9 11:14:56 compute-0 nova_compute[189493]: 2025-12-09 11:14:56.068 189497 ERROR oslo_service.periodic_task     task(self, context)
Dec  9 11:14:56 compute-0 nova_compute[189493]: 2025-12-09 11:14:56.068 189497 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 9863, in _heal_instance_info_cache
Dec  9 11:14:56 compute-0 nova_compute[189493]: 2025-12-09 11:14:56.068 189497 ERROR oslo_service.periodic_task     db_instances = objects.InstanceList.get_by_host(
Dec  9 11:14:56 compute-0 nova_compute[189493]: 2025-12-09 11:14:56.068 189497 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 175, in wrapper
Dec  9 11:14:56 compute-0 nova_compute[189493]: 2025-12-09 11:14:56.068 189497 ERROR oslo_service.periodic_task     result = cls.indirection_api.object_class_action_versions(
Dec  9 11:14:56 compute-0 nova_compute[189493]: 2025-12-09 11:14:56.068 189497 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/nova/conductor/rpcapi.py", line 240, in object_class_action_versions
Dec  9 11:14:56 compute-0 nova_compute[189493]: 2025-12-09 11:14:56.068 189497 ERROR oslo_service.periodic_task     return cctxt.call(context, 'object_class_action_versions',
Dec  9 11:14:56 compute-0 nova_compute[189493]: 2025-12-09 11:14:56.068 189497 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/client.py", line 190, in call
Dec  9 11:14:56 compute-0 nova_compute[189493]: 2025-12-09 11:14:56.068 189497 ERROR oslo_service.periodic_task     result = self.transport._send(
Dec  9 11:14:56 compute-0 nova_compute[189493]: 2025-12-09 11:14:56.068 189497 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 123, in _send
Dec  9 11:14:56 compute-0 rsyslogd[236818]: message too long (9052) with configured size 8096, begin of message is: 2025-12-09 11:14:56.065 189497 ERROR nova.servicegroup.drivers.db ['Traceback (m [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Dec  9 11:14:56 compute-0 nova_compute[189493]: 2025-12-09 11:14:56.068 189497 ERROR oslo_service.periodic_task     return self._driver.send(target, ctxt, message,
Dec  9 11:14:56 compute-0 nova_compute[189493]: 2025-12-09 11:14:56.068 189497 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 689, in send
Dec  9 11:14:56 compute-0 nova_compute[189493]: 2025-12-09 11:14:56.068 189497 ERROR oslo_service.periodic_task     return self._send(target, ctxt, message, wait_for_reply, timeout,
Dec  9 11:14:56 compute-0 nova_compute[189493]: 2025-12-09 11:14:56.068 189497 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 681, in _send
Dec  9 11:14:56 compute-0 nova_compute[189493]: 2025-12-09 11:14:56.068 189497 ERROR oslo_service.periodic_task     raise result
Dec  9 11:14:56 compute-0 rsyslogd[236818]: message too long (8558) with configured size 8096, begin of message is: ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packag [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Dec  9 11:14:56 compute-0 nova_compute[189493]: 2025-12-09 11:14:56.068 189497 ERROR oslo_service.periodic_task oslo_messaging.rpc.client.RemoteError: Remote error: DBConnectionError (pymysql.err.OperationalError) (2003, "Can't connect to MySQL server on 'openstack-cell1.openstack.svc' ([Errno 111] ECONNREFUSED)")
Dec  9 11:14:56 compute-0 nova_compute[189493]: 2025-12-09 11:14:56.068 189497 ERROR oslo_service.periodic_task (Background on this error at: https://sqlalche.me/e/14/e3q8)
Dec  9 11:14:56 compute-0 nova_compute[189493]: 2025-12-09 11:14:56.068 189497 ERROR oslo_service.periodic_task ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 569, in connect\n    sock = socket.create_connection(\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 63, in create_connection\n    raise err\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 53, in create_connection\n    sock.connect(sa)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 270, in connect\n    socket_checkerr(fd)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 54, in socket_checkerr\n    raise socket.error(err, errno.errorcode[err])\n', 'ConnectionRefusedError: [Errno 111] ECONNREFUSED\n', '\nDuring handling of the above exception, another exception occurred:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'pymysql.err.OperationalError: (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n', '\nThe above exception was the direct cause of the following exception:\n\n', 'Traceback (most recent call last):\n', '  File 
"/usr/lib/python3.9/site-packages/nova/conductor/manager.py", line 142, in _object_dispatch\n    return getattr(target, method)(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 184, in wrapper\n    result = fn(cls, context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/objects/instance.py", line 1378, in get_by_host\n    db_inst_list = cls._db_instance_get_all_by_host(\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 179, in wrapper\n    return f(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/objects/instance.py", line 1373, in _db_instance_get_all_by_host\n    return db.instance_get_all_by_host(context, host,\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 241, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 2155, in instance_get_all_by_host\n    instances = query.filter_by(host=host).all()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2773, in all\n    return self._iter().all()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2916, in _iter\n    result = self.session.execute(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1713, in execute\n    conn = self._connection_for_bind(bind)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1552, in _connection_for_bind\n    return self._transaction._connection_for_bind(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 747, in _connection_for_bind\n    conn = bind.connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3315, in connect\n    return self._connection_cls(self, close_with_result=close_with_result)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 96, in __init__\n    else engine.raw_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3394, in raw_connection\n    return self._wrap_pool_connect(self.pool.connect, _connection)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3364, in _wrap_pool_connect\n    Connection._handle_dbapi_exception_noconnection(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 2196, in _handle_dbapi_exception_noconnection\n    util.raise_(newraise, with_traceback=exc_info[2], from_=e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in 
checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py"
Dec  9 11:14:56 compute-0 nova_compute[189493]: 2025-12-09 11:14:56.068 189497 ERROR oslo_service.periodic_task
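Every periodic-task failure in this stretch has the same root cause: "[Errno 111] ECONNREFUSED" from 'openstack-cell1.openstack.svc', i.e. nothing is accepting TCP connections on the cell1 MySQL endpoint (service down or not yet listening), as opposed to a timeout or an authentication failure. A minimal probe, assuming the standard MySQL port 3306 and that the service name resolves from this host, would be:

    import socket

    # Hypothetical probe: check whether the cell1 DB endpoint accepts TCP
    # connections at all. ECONNREFUSED here matches the errors in the log.
    try:
        with socket.create_connection(("openstack-cell1.openstack.svc", 3306), timeout=5):
            print("TCP connect OK - the refusal seen in the log has cleared")
    except OSError as exc:
        print(f"still failing: {exc}")  # e.g. [Errno 111] Connection refused

This is the same socket.create_connection() call that fails inside pymysql at the top of each quoted traceback, minus the eventlet green-socket wrapper.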
Dec  9 11:14:56 compute-0 rsyslogd[236818]: message too long (8622) with configured size 8096, begin of message is: 2025-12-09 11:14:56.068 189497 ERROR oslo_service.periodic_task ['Traceback (mos [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
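The rsyslogd "message too long" records are a side effect, not a separate fault: these multi-kilobyte Python tracebacks exceed rsyslog's configured 8096-byte message cap, so the oversized records are truncated, which is why several of the quoted tracebacks in this section cut off mid-string (e.g. 'oslo_db.exception.DBCon). If complete tracebacks are needed, the cap can be raised in /etc/rsyslog.conf with the RainerScript setting global(maxMessageSize="16k") (or the legacy $MaxMessageSize directive); the https://www.rsyslog.com/e/2445 link rsyslog prints documents the same limit.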
Dec  9 11:14:57 compute-0 podman[251888]: 2025-12-09 11:14:57.956634506 +0000 UTC m=+0.101378881 container health_status 5da5cd4e36e0bba48fb617392bc8983ed1dbced7e4599ef74bb3327a2d50468d (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, vendor=Red Hat, Inc., version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=edpm, architecture=x86_64, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, distribution-scope=public, maintainer=Red Hat, Inc.)
Dec  9 11:14:57 compute-0 nova_compute[189493]: 2025-12-09 11:14:57.972 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 11:14:57 compute-0 podman[251889]: 2025-12-09 11:14:57.992159903 +0000 UTC m=+0.128297043 container health_status d3a438131bb4ae6fd62d2e1493edbbbd51d1b8d6cbe1e9243f414a3aa421452b (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter)
Dec  9 11:14:58 compute-0 podman[251890]: 2025-12-09 11:14:58.037072706 +0000 UTC m=+0.168058971 container health_status e0a077177b2f078df1f170a6e5c0e8e08d4365b999ec0c487047ed6ab628f3d6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team)
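The three podman health_status records above are periodic container healthcheck results (all healthy, failing streak 0) for the telemetry exporters and ovn_controller managed by edpm_ansible; the DB outage has not affected them. One container's recorded health can be re-checked from the host at any time; a small sketch wrapping the real podman CLI with Python's subprocess module (container name taken from the log):

    import subprocess

    def health_status(name: str) -> str:
        # 'podman inspect --format' with a Go template reads the recorded
        # health state of a container that defines a healthcheck.
        out = subprocess.run(
            ["podman", "inspect", "--format", "{{.State.Health.Status}}", name],
            capture_output=True, text=True, check=True,
        )
        return out.stdout.strip()

    print(health_status("ovn_controller"))  # expected: healthy, per the log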
Dec  9 11:14:59 compute-0 podman[203687]: time="2025-12-09T11:14:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  9 11:14:59 compute-0 podman[203687]: @ - - [09/Dec/2025:11:14:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28290 "" "Go-http-client/1.1"
Dec  9 11:14:59 compute-0 podman[203687]: @ - - [09/Dec/2025:11:14:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4335 "" "Go-http-client/1.1"
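The two HTTP records above show a local client (Go-http-client/1.1) driving the libpod REST API over the podman service socket: a full container listing (containers/json?all=true, 28290 bytes) followed by a one-shot stats call. A hand-rolled equivalent is sketched below; the socket path /run/podman/podman.sock is an assumption, since the log does not show it:

    import http.client
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        # HTTPConnection variant that dials a unix socket instead of TCP;
        # enough to talk to the libpod REST API seen in the log.
        def __init__(self, path):
            super().__init__("localhost")
            self.unix_path = path

        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self.unix_path)

    conn = UnixHTTPConnection("/run/podman/podman.sock")  # assumed socket path
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    resp = conn.getresponse()
    print(resp.status, len(resp.read()), "bytes")  # the log shows 200 with 28290 bytes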
Dec  9 11:14:59 compute-0 nova_compute[189493]: 2025-12-09 11:14:59.842 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  9 11:14:59 compute-0 nova_compute[189493]: 2025-12-09 11:14:59.842 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
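The _reclaim_queued_deletes skip is expected behaviour, not part of the failure: nova-compute only reclaims SOFT_DELETED instances when reclaim_instance_interval in the [DEFAULT] section of nova.conf is set above 0 (for example, reclaim_instance_interval = 604800 for a one-week grace period); the value on this host is evidently the default of 0, so the task exits before touching the database.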
Dec  9 11:14:59 compute-0 nova_compute[189493]: 2025-12-09 11:14:59.843 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  9 11:14:59 compute-0 nova_compute[189493]: 2025-12-09 11:14:59.843 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Dec  9 11:14:59 compute-0 nova_compute[189493]: 2025-12-09 11:14:59.906 189497 ERROR oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Error during ComputeManager._cleanup_incomplete_migrations: oslo_messaging.rpc.client.RemoteError: Remote error: DBConnectionError (pymysql.err.OperationalError) (2003, "Can't connect to MySQL server on 'openstack-cell1.openstack.svc' ([Errno 111] ECONNREFUSED)")
Dec  9 11:14:59 compute-0 nova_compute[189493]: (Background on this error at: https://sqlalche.me/e/14/e3q8)
Dec  9 11:14:59 compute-0 nova_compute[189493]: ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 569, in connect\n    sock = socket.create_connection(\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 63, in create_connection\n    raise err\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 53, in create_connection\n    sock.connect(sa)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 270, in connect\n    socket_checkerr(fd)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 54, in socket_checkerr\n    raise socket.error(err, errno.errorcode[err])\n', 'ConnectionRefusedError: [Errno 111] ECONNREFUSED\n', '\nDuring handling of the above exception, another exception occurred:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'pymysql.err.OperationalError: (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n', '\nThe above exception was the direct cause of the following exception:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/nova/conductor/manager.py", line 142, in _object_dispatch\n    return getattr(target, 
method)(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 184, in wrapper\n    result = fn(cls, context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/objects/migration.py", line 266, in get_by_filters\n    db_migrations = db.migration_get_all_by_filters(\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 224, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 3457, in migration_get_all_by_filters\n    return query.all()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2773, in all\n    return self._iter().all()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2916, in _iter\n    result = self.session.execute(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1713, in execute\n    conn = self._connection_for_bind(bind)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1552, in _connection_for_bind\n    return self._transaction._connection_for_bind(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 747, in _connection_for_bind\n    conn = bind.connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3315, in connect\n    return self._connection_cls(self, close_with_result=close_with_result)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 96, in __init__\n    else engine.raw_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3394, in raw_connection\n    return self._wrap_pool_connect(self.pool.connect, _connection)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3364, in _wrap_pool_connect\n    Connection._handle_dbapi_exception_noconnection(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 2196, in _handle_dbapi_exception_noconnection\n    util.raise_(newraise, with_traceback=exc_info[2], from_=e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n  
  compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'oslo_db.exception.DBConnectionError: (pymysql.err.OperationalError) (2003, "Can\'t conn
Dec  9 11:14:59 compute-0 nova_compute[189493]: 2025-12-09 11:14:59.906 189497 ERROR oslo_service.periodic_task Traceback (most recent call last):
Dec  9 11:14:59 compute-0 nova_compute[189493]: 2025-12-09 11:14:59.906 189497 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/oslo_service/periodic_task.py", line 216, in run_periodic_tasks
Dec  9 11:14:59 compute-0 nova_compute[189493]: 2025-12-09 11:14:59.906 189497 ERROR oslo_service.periodic_task     task(self, context)
Dec  9 11:14:59 compute-0 nova_compute[189493]: 2025-12-09 11:14:59.906 189497 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 11186, in _cleanup_incomplete_migrations
Dec  9 11:14:59 compute-0 nova_compute[189493]: 2025-12-09 11:14:59.906 189497 ERROR oslo_service.periodic_task     migrations = objects.MigrationList.get_by_filters(context,
Dec  9 11:14:59 compute-0 nova_compute[189493]: 2025-12-09 11:14:59.906 189497 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 175, in wrapper
Dec  9 11:14:59 compute-0 nova_compute[189493]: 2025-12-09 11:14:59.906 189497 ERROR oslo_service.periodic_task     result = cls.indirection_api.object_class_action_versions(
Dec  9 11:14:59 compute-0 nova_compute[189493]: 2025-12-09 11:14:59.906 189497 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/nova/conductor/rpcapi.py", line 240, in object_class_action_versions
Dec  9 11:14:59 compute-0 nova_compute[189493]: 2025-12-09 11:14:59.906 189497 ERROR oslo_service.periodic_task     return cctxt.call(context, 'object_class_action_versions',
Dec  9 11:14:59 compute-0 nova_compute[189493]: 2025-12-09 11:14:59.906 189497 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/client.py", line 190, in call
Dec  9 11:14:59 compute-0 nova_compute[189493]: 2025-12-09 11:14:59.906 189497 ERROR oslo_service.periodic_task     result = self.transport._send(
Dec  9 11:14:59 compute-0 nova_compute[189493]: 2025-12-09 11:14:59.906 189497 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 123, in _send
Dec  9 11:14:59 compute-0 nova_compute[189493]: 2025-12-09 11:14:59.906 189497 ERROR oslo_service.periodic_task     return self._driver.send(target, ctxt, message,
Dec  9 11:14:59 compute-0 nova_compute[189493]: 2025-12-09 11:14:59.906 189497 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 689, in send
Dec  9 11:14:59 compute-0 nova_compute[189493]: 2025-12-09 11:14:59.906 189497 ERROR oslo_service.periodic_task     return self._send(target, ctxt, message, wait_for_reply, timeout,
Dec  9 11:14:59 compute-0 nova_compute[189493]: 2025-12-09 11:14:59.906 189497 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 681, in _send
Dec  9 11:14:59 compute-0 nova_compute[189493]: 2025-12-09 11:14:59.906 189497 ERROR oslo_service.periodic_task     raise result
Dec  9 11:14:59 compute-0 nova_compute[189493]: 2025-12-09 11:14:59.906 189497 ERROR oslo_service.periodic_task oslo_messaging.rpc.client.RemoteError: Remote error: DBConnectionError (pymysql.err.OperationalError) (2003, "Can't connect to MySQL server on 'openstack-cell1.openstack.svc' ([Errno 111] ECONNREFUSED)")
Dec  9 11:14:59 compute-0 nova_compute[189493]: 2025-12-09 11:14:59.906 189497 ERROR oslo_service.periodic_task (Background on this error at: https://sqlalche.me/e/14/e3q8)
Dec  9 11:14:59 compute-0 nova_compute[189493]: 2025-12-09 11:14:59.906 189497 ERROR oslo_service.periodic_task ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 569, in connect\n    sock = socket.create_connection(\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 63, in create_connection\n    raise err\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 53, in create_connection\n    sock.connect(sa)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 270, in connect\n    socket_checkerr(fd)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 54, in socket_checkerr\n    raise socket.error(err, errno.errorcode[err])\n', 'ConnectionRefusedError: [Errno 111] ECONNREFUSED\n', '\nDuring handling of the above exception, another exception occurred:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'pymysql.err.OperationalError: (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n', '\nThe above exception was the direct cause of the following exception:\n\n', 'Traceback (most recent call last):\n', '  File 
"/usr/lib/python3.9/site-packages/nova/conductor/manager.py", line 142, in _object_dispatch\n    return getattr(target, method)(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 184, in wrapper\n    result = fn(cls, context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/objects/migration.py", line 266, in get_by_filters\n    db_migrations = db.migration_get_all_by_filters(\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 224, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 3457, in migration_get_all_by_filters\n    return query.all()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2773, in all\n    return self._iter().all()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2916, in _iter\n    result = self.session.execute(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1713, in execute\n    conn = self._connection_for_bind(bind)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1552, in _connection_for_bind\n    return self._transaction._connection_for_bind(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 747, in _connection_for_bind\n    conn = bind.connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3315, in connect\n    return self._connection_cls(self, close_with_result=close_with_result)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 96, in __init__\n    else engine.raw_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3394, in raw_connection\n    return self._wrap_pool_connect(self.pool.connect, _connection)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3364, in _wrap_pool_connect\n    Connection._handle_dbapi_exception_noconnection(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 2196, in _handle_dbapi_exception_noconnection\n    util.raise_(newraise, with_traceback=exc_info[2], from_=e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on 
connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'oslo_db.exception.DBCon
Dec  9 11:14:59 compute-0 nova_compute[189493]: 2025-12-09 11:14:59.906 189497 ERROR oslo_service.periodic_task
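Note that every traceback in this section terminates in oslo_messaging.rpc.client.RemoteError rather than a local DB exception: nova-compute has no direct database connection and routes all object loads through nova-conductor (the object_class_action_versions RPC visible in each stack), so the DBConnectionError is raised on the conductor and shipped back serialized. A minimal sketch of what such a caller receives, assuming oslo.messaging's RemoteError keeps its exc_type/value/traceback constructor (the values below are reconstructed from this log, not real):

    from oslo_messaging.rpc.client import RemoteError

    # Hypothetical reconstruction of the error carried back over RPC, showing
    # the fields a caller of cctxt.call() can inspect on failure.
    exc = RemoteError(
        exc_type="DBConnectionError",
        value="(pymysql.err.OperationalError) (2003, \"Can't connect ...\")",
        traceback="Traceback (most recent call last): ...",
    )
    print(exc.exc_type, "-", exc.value)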
Dec  9 11:15:00 compute-0 rsyslogd[236818]: message too long (8248) with configured size 8096, begin of message is: ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packag [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Dec  9 11:15:00 compute-0 rsyslogd[236818]: message too long (8312) with configured size 8096, begin of message is: 2025-12-09 11:14:59.906 189497 ERROR oslo_service.periodic_task ['Traceback (mos [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Dec  9 11:15:00 compute-0 nova_compute[189493]: 2025-12-09 11:15:00.700 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 11:15:00 compute-0 nova_compute[189493]: 2025-12-09 11:15:00.842 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  9 11:15:00 compute-0 nova_compute[189493]: 2025-12-09 11:15:00.987 189497 ERROR oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Error during ComputeManager._cleanup_expired_console_auth_tokens: oslo_messaging.rpc.client.RemoteError: Remote error: DBConnectionError (pymysql.err.OperationalError) (2003, "Can't connect to MySQL server on 'openstack-cell1.openstack.svc' ([Errno 111] ECONNREFUSED)")
Dec  9 11:15:00 compute-0 nova_compute[189493]: (Background on this error at: https://sqlalche.me/e/14/e3q8)
Dec  9 11:15:00 compute-0 nova_compute[189493]: ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 569, in connect\n    sock = socket.create_connection(\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 63, in create_connection\n    raise err\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 53, in create_connection\n    sock.connect(sa)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 270, in connect\n    socket_checkerr(fd)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 54, in socket_checkerr\n    raise socket.error(err, errno.errorcode[err])\n', 'ConnectionRefusedError: [Errno 111] ECONNREFUSED\n', '\nDuring handling of the above exception, another exception occurred:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'pymysql.err.OperationalError: (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n', '\nThe above exception was the direct cause of the following exception:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/nova/conductor/manager.py", line 142, in _object_dispatch\n    return getattr(target, 
method)(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 184, in wrapper\n    result = fn(cls, context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/objects/console_auth_token.py", line 182, in clean_expired_console_auths\n    db.console_auth_token_destroy_expired(context)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 207, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 4886, in console_auth_token_destroy_expired\n    context.session.query(models.ConsoleAuthToken).\\\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 3222, in delete\n    result = self.session.execute(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1713, in execute\n    conn = self._connection_for_bind(bind)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1552, in _connection_for_bind\n    return self._transaction._connection_for_bind(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 747, in _connection_for_bind\n    conn = bind.connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3315, in connect\n    return self._connection_cls(self, close_with_result=close_with_result)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 96, in __init__\n    else engine.raw_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3394, in raw_connection\n    return self._wrap_pool_connect(self.pool.connect, _connection)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3364, in _wrap_pool_connect\n    Connection._handle_dbapi_exception_noconnection(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 2196, in _handle_dbapi_exception_noconnection\n    util.raise_(newraise, with_traceback=exc_info[2], from_=e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File 
"/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'oslo_db.exception.DBConnectionError: (pymysql.err.OperationalError) (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 
Dec  9 11:15:00 compute-0 nova_compute[189493]: 2025-12-09 11:15:00.987 189497 ERROR oslo_service.periodic_task Traceback (most recent call last):
Dec  9 11:15:00 compute-0 nova_compute[189493]: 2025-12-09 11:15:00.987 189497 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/oslo_service/periodic_task.py", line 216, in run_periodic_tasks
Dec  9 11:15:00 compute-0 nova_compute[189493]: 2025-12-09 11:15:00.987 189497 ERROR oslo_service.periodic_task     task(self, context)
Dec  9 11:15:00 compute-0 nova_compute[189493]: 2025-12-09 11:15:00.987 189497 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 11282, in _cleanup_expired_console_auth_tokens
Dec  9 11:15:00 compute-0 nova_compute[189493]: 2025-12-09 11:15:00.987 189497 ERROR oslo_service.periodic_task     objects.ConsoleAuthToken.clean_expired_console_auths(context)
Dec  9 11:15:00 compute-0 nova_compute[189493]: 2025-12-09 11:15:00.987 189497 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 175, in wrapper
Dec  9 11:15:00 compute-0 nova_compute[189493]: 2025-12-09 11:15:00.987 189497 ERROR oslo_service.periodic_task     result = cls.indirection_api.object_class_action_versions(
Dec  9 11:15:00 compute-0 nova_compute[189493]: 2025-12-09 11:15:00.987 189497 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/nova/conductor/rpcapi.py", line 240, in object_class_action_versions
Dec  9 11:15:00 compute-0 nova_compute[189493]: 2025-12-09 11:15:00.987 189497 ERROR oslo_service.periodic_task     return cctxt.call(context, 'object_class_action_versions',
Dec  9 11:15:00 compute-0 nova_compute[189493]: 2025-12-09 11:15:00.987 189497 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/client.py", line 190, in call
Dec  9 11:15:00 compute-0 nova_compute[189493]: 2025-12-09 11:15:00.987 189497 ERROR oslo_service.periodic_task     result = self.transport._send(
Dec  9 11:15:00 compute-0 nova_compute[189493]: 2025-12-09 11:15:00.987 189497 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 123, in _send
Dec  9 11:15:00 compute-0 nova_compute[189493]: 2025-12-09 11:15:00.987 189497 ERROR oslo_service.periodic_task     return self._driver.send(target, ctxt, message,
Dec  9 11:15:00 compute-0 nova_compute[189493]: 2025-12-09 11:15:00.987 189497 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 689, in send
Dec  9 11:15:00 compute-0 nova_compute[189493]: 2025-12-09 11:15:00.987 189497 ERROR oslo_service.periodic_task     return self._send(target, ctxt, message, wait_for_reply, timeout,
Dec  9 11:15:00 compute-0 nova_compute[189493]: 2025-12-09 11:15:00.987 189497 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 681, in _send
Dec  9 11:15:00 compute-0 nova_compute[189493]: 2025-12-09 11:15:00.987 189497 ERROR oslo_service.periodic_task     raise result
Dec  9 11:15:00 compute-0 nova_compute[189493]: 2025-12-09 11:15:00.987 189497 ERROR oslo_service.periodic_task oslo_messaging.rpc.client.RemoteError: Remote error: DBConnectionError (pymysql.err.OperationalError) (2003, "Can't connect to MySQL server on 'openstack-cell1.openstack.svc' ([Errno 111] ECONNREFUSED)")
Dec  9 11:15:00 compute-0 nova_compute[189493]: 2025-12-09 11:15:00.987 189497 ERROR oslo_service.periodic_task (Background on this error at: https://sqlalche.me/e/14/e3q8)
Dec  9 11:15:00 compute-0 nova_compute[189493]: 2025-12-09 11:15:00.987 189497 ERROR oslo_service.periodic_task ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 569, in connect\n    sock = socket.create_connection(\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 63, in create_connection\n    raise err\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 53, in create_connection\n    sock.connect(sa)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 270, in connect\n    socket_checkerr(fd)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 54, in socket_checkerr\n    raise socket.error(err, errno.errorcode[err])\n', 'ConnectionRefusedError: [Errno 111] ECONNREFUSED\n', '\nDuring handling of the above exception, another exception occurred:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'pymysql.err.OperationalError: (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n', '\nThe above exception was the direct cause of the following exception:\n\n', 'Traceback (most recent call last):\n', '  File 
"/usr/lib/python3.9/site-packages/nova/conductor/manager.py", line 142, in _object_dispatch\n    return getattr(target, method)(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 184, in wrapper\n    result = fn(cls, context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/objects/console_auth_token.py", line 182, in clean_expired_console_auths\n    db.console_auth_token_destroy_expired(context)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 207, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 4886, in console_auth_token_destroy_expired\n    context.session.query(models.ConsoleAuthToken).\\\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 3222, in delete\n    result = self.session.execute(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1713, in execute\n    conn = self._connection_for_bind(bind)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1552, in _connection_for_bind\n    return self._transaction._connection_for_bind(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 747, in _connection_for_bind\n    conn = bind.connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3315, in connect\n    return self._connection_cls(self, close_with_result=close_with_result)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 96, in __init__\n    else engine.raw_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3394, in raw_connection\n    return self._wrap_pool_connect(self.pool.connect, _connection)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3364, in _wrap_pool_connect\n    Connection._handle_dbapi_exception_noconnection(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 2196, in _handle_dbapi_exception_noconnection\n    util.raise_(newraise, with_traceback=exc_info[2], from_=e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File 
"/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'oslo_db.exception.DBConnectionError: (pymysql.err.OperationalError) (2003, "Can\'t conne
Dec  9 11:15:00 compute-0 nova_compute[189493]: 2025-12-09 11:15:00.987 189497 ERROR oslo_service.periodic_task #033[00m
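The root failure in the traceback above is a plain TCP ECONNREFUSED against 'openstack-cell1.openstack.svc': pymysql re-raises it as OperationalError 2003 and oslo.db wraps that as DBConnectionError, which is why three chained tracebacks appear for a single refused connection. A quick way to tell "nothing listening" apart from DNS or routing trouble is a bare socket probe; a minimal sketch, assuming the default MySQL/MariaDB port 3306 (the port is not shown in the log):

    import socket

    HOST = "openstack-cell1.openstack.svc"  # hostname taken from the log
    PORT = 3306                             # assumption: default MySQL/MariaDB port

    try:
        # The same call the innermost traceback shows failing under eventlet.
        with socket.create_connection((HOST, PORT), timeout=5):
            print("TCP connect OK: something is listening")
    except socket.gaierror as e:
        print(f"DNS failure: {e}")                  # name does not resolve
    except ConnectionRefusedError:
        print("ECONNREFUSED: host reachable, no listener on the port")
    except OSError as e:
        print(f"other network error: {e}")          # timeout, no route, ...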
Dec  9 11:15:00 compute-0 rsyslogd[236818]: message too long (8183) with configured size 8096, begin of message is: ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packag [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Dec  9 11:15:01 compute-0 rsyslogd[236818]: message too long (8247) with configured size 8096, begin of message is: 2025-12-09 11:15:00.987 189497 ERROR oslo_service.periodic_task ['Traceback (mos [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
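These two rsyslogd lines explain why the tracebacks above are stored cut off mid-line: the multi-line Python tracebacks arrive as single messages of 8183 and 8247 bytes, above rsyslog's configured 8096-byte limit, so the tail is dropped. If the full tracebacks are needed, the limit can be raised via rsyslog's global(maxMessageSize="...") setting (legacy directive: $MaxMessageSize). The check being reported is simply, as a sketch:

    # Sizes taken from the two rsyslogd lines above.
    configured_size = 8096              # rsyslog maxMessageSize in bytes
    for msg_len in (8183, 8247):
        if msg_len > configured_size:
            print(f"message too long ({msg_len}) with configured size "
                  f"{configured_size}: {msg_len - configured_size} bytes lost")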
Dec  9 11:15:01 compute-0 openstack_network_exporter[205823]: ERROR   11:15:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  9 11:15:01 compute-0 openstack_network_exporter[205823]: ERROR   11:15:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  9 11:15:01 compute-0 openstack_network_exporter[205823]: ERROR   11:15:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  9 11:15:01 compute-0 openstack_network_exporter[205823]: ERROR   11:15:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  9 11:15:01 compute-0 openstack_network_exporter[205823]: ERROR   11:15:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
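These exporter errors are expected on a compute node: ovn-northd and a standalone ovsdb-server run on the control plane, so their control sockets never exist here, and the dpif-netdev/* appctl calls only answer when OVS uses the userspace (netdev/DPDK) datapath. The exporter locates daemons by their control socket files; a minimal sketch of that same discovery step, assuming the conventional runtime directories /var/run/openvswitch and /var/run/ovn:

    import glob
    import os

    RUN_DIRS = ["/var/run/openvswitch", "/var/run/ovn"]   # assumed default paths
    DAEMONS = ["ovs-vswitchd", "ovsdb-server", "ovn-northd"]

    for daemon in DAEMONS:
        # Control sockets are created as <daemon>.<pid>.ctl in the run directory.
        hits = [p for d in RUN_DIRS
                for p in glob.glob(os.path.join(d, f"{daemon}.*.ctl"))]
        if hits:
            print(f"{daemon}: control socket(s): {hits}")
        else:
            print(f"{daemon}: no control socket files found")  # the exporter's error path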
Dec  9 11:15:02 compute-0 nova_compute[189493]: 2025-12-09 11:15:02.971 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:15:05 compute-0 nova_compute[189493]: 2025-12-09 11:15:05.702 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
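The recurring "[POLLIN] on fd 26" lines are the OVS IDL's event loop waking up because the ovsdb-server connection has data to read; at DEBUG verbosity ovsdbapp logs every wakeup, so these lines are noise rather than errors. The mechanism underneath is ordinary poll(2); a stdlib-only illustration of the same readiness event (not the ovs library itself):

    import os
    import select

    # Register one end of a pipe for POLLIN, the readiness event the log
    # reports whenever the OVSDB socket has a notification waiting.
    r, w = os.pipe()
    poller = select.poll()
    poller.register(r, select.POLLIN)

    os.write(w, b"update")                   # simulate the server sending data
    for fd, events in poller.poll(1000):     # (fd, eventmask) pairs
        if events & select.POLLIN:
            print(f"[POLLIN] on fd {fd}: {os.read(fd, 64)!r}")
    os.close(r)
    os.close(w)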
Dec  9 11:15:06 compute-0 nova_compute[189493]: 2025-12-09 11:15:06.588 189497 ERROR nova.servicegroup.drivers.db [-] Unexpected error while reporting service status: oslo_messaging.rpc.client.RemoteError: Remote error: DBConnectionError (pymysql.err.OperationalError) (2003, "Can't connect to MySQL server on 'openstack-cell1.openstack.svc' ([Errno 111] ECONNREFUSED)")
Dec  9 11:15:06 compute-0 nova_compute[189493]: (Background on this error at: https://sqlalche.me/e/14/e3q8)
Dec  9 11:15:06 compute-0 nova_compute[189493]: ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 569, in connect\n    sock = socket.create_connection(\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 63, in create_connection\n    raise err\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 53, in create_connection\n    sock.connect(sa)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 270, in connect\n    socket_checkerr(fd)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 54, in socket_checkerr\n    raise socket.error(err, errno.errorcode[err])\n', 'ConnectionRefusedError: [Errno 111] ECONNREFUSED\n', '\nDuring handling of the above exception, another exception occurred:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'pymysql.err.OperationalError: (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n', '\nThe above exception was the direct cause of the following exception:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/nova/conductor/manager.py", line 142, in _object_dispatch\n    return getattr(target, 
method)(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 226, in wrapper\n    return fn(self, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/objects/service.py", line 505, in save\n    db_service = db.service_update(self._context, self.id, updates)\n', '  File "/usr/lib/python3.9/site-packages/oslo_db/api.py", line 154, in wrapper\n    ectxt.value = e.inner_exc\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__\n    self.force_reraise()\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise\n    raise self.value\n', '  File "/usr/lib/python3.9/site-packages/oslo_db/api.py", line 142, in wrapper\n    return f(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 207, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 563, in service_update\n    service_ref = service_get(context, service_id)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 224, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 398, in service_get\n    result = query.first()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2824, in first\n    return self.limit(1)._iter().first()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2916, in _iter\n    result = self.session.execute(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1713, in execute\n    conn = self._connection_for_bind(bind)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1552, in _connection_for_bind\n    return self._transaction._connection_for_bind(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 747, in _connection_for_bind\n    conn = bind.connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3315, in connect\n    return self._connection_cls(self, close_with_result=close_with_result)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 96, in __init__\n    else engine.raw_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3394, in raw_connection\n    return self._wrap_pool_connect(self.pool.connect, _connection)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3364, in _wrap_pool_connect\n    Connection._handle_dbapi_exception_noconnection(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 2196, in _handle_dbapi_exception_noconnection\n    util.raise_(newraise, with_traceback=exc_info[2], from_=e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File 
"/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self
Dec  9 11:15:06 compute-0 nova_compute[189493]: 2025-12-09 11:15:06.588 189497 ERROR nova.servicegroup.drivers.db Traceback (most recent call last):
Dec  9 11:15:06 compute-0 nova_compute[189493]: 2025-12-09 11:15:06.588 189497 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py", line 92, in _report_state
Dec  9 11:15:06 compute-0 nova_compute[189493]: 2025-12-09 11:15:06.588 189497 ERROR nova.servicegroup.drivers.db     service.service_ref.save()
Dec  9 11:15:06 compute-0 nova_compute[189493]: 2025-12-09 11:15:06.588 189497 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 209, in wrapper
Dec  9 11:15:06 compute-0 nova_compute[189493]: 2025-12-09 11:15:06.588 189497 ERROR nova.servicegroup.drivers.db     updates, result = self.indirection_api.object_action(
Dec  9 11:15:06 compute-0 nova_compute[189493]: 2025-12-09 11:15:06.588 189497 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/nova/conductor/rpcapi.py", line 247, in object_action
Dec  9 11:15:06 compute-0 nova_compute[189493]: 2025-12-09 11:15:06.588 189497 ERROR nova.servicegroup.drivers.db     return cctxt.call(context, 'object_action', objinst=objinst,
Dec  9 11:15:06 compute-0 nova_compute[189493]: 2025-12-09 11:15:06.588 189497 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/client.py", line 190, in call
Dec  9 11:15:06 compute-0 nova_compute[189493]: 2025-12-09 11:15:06.588 189497 ERROR nova.servicegroup.drivers.db     result = self.transport._send(
Dec  9 11:15:06 compute-0 nova_compute[189493]: 2025-12-09 11:15:06.588 189497 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 123, in _send
Dec  9 11:15:06 compute-0 nova_compute[189493]: 2025-12-09 11:15:06.588 189497 ERROR nova.servicegroup.drivers.db     return self._driver.send(target, ctxt, message,
Dec  9 11:15:06 compute-0 nova_compute[189493]: 2025-12-09 11:15:06.588 189497 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 689, in send
Dec  9 11:15:06 compute-0 nova_compute[189493]: 2025-12-09 11:15:06.588 189497 ERROR nova.servicegroup.drivers.db     return self._send(target, ctxt, message, wait_for_reply, timeout,
Dec  9 11:15:06 compute-0 nova_compute[189493]: 2025-12-09 11:15:06.588 189497 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 681, in _send
Dec  9 11:15:06 compute-0 nova_compute[189493]: 2025-12-09 11:15:06.588 189497 ERROR nova.servicegroup.drivers.db     raise result
Dec  9 11:15:06 compute-0 nova_compute[189493]: 2025-12-09 11:15:06.588 189497 ERROR nova.servicegroup.drivers.db oslo_messaging.rpc.client.RemoteError: Remote error: DBConnectionError (pymysql.err.OperationalError) (2003, "Can't connect to MySQL server on 'openstack-cell1.openstack.svc' ([Errno 111] ECONNREFUSED)")
Dec  9 11:15:06 compute-0 nova_compute[189493]: 2025-12-09 11:15:06.588 189497 ERROR nova.servicegroup.drivers.db (Background on this error at: https://sqlalche.me/e/14/e3q8)
Dec  9 11:15:06 compute-0 nova_compute[189493]: 2025-12-09 11:15:06.588 189497 ERROR nova.servicegroup.drivers.db ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 569, in connect\n    sock = socket.create_connection(\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 63, in create_connection\n    raise err\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 53, in create_connection\n    sock.connect(sa)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 270, in connect\n    socket_checkerr(fd)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 54, in socket_checkerr\n    raise socket.error(err, errno.errorcode[err])\n', 'ConnectionRefusedError: [Errno 111] ECONNREFUSED\n', '\nDuring handling of the above exception, another exception occurred:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'pymysql.err.OperationalError: (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n', '\nThe above exception was the direct cause of the following exception:\n\n', 'Traceback (most recent call last):\n', '  File 
"/usr/lib/python3.9/site-packages/nova/conductor/manager.py", line 142, in _object_dispatch\n    return getattr(target, method)(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 226, in wrapper\n    return fn(self, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/objects/service.py", line 505, in save\n    db_service = db.service_update(self._context, self.id, updates)\n', '  File "/usr/lib/python3.9/site-packages/oslo_db/api.py", line 154, in wrapper\n    ectxt.value = e.inner_exc\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__\n    self.force_reraise()\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise\n    raise self.value\n', '  File "/usr/lib/python3.9/site-packages/oslo_db/api.py", line 142, in wrapper\n    return f(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 207, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 563, in service_update\n    service_ref = service_get(context, service_id)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 224, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 398, in service_get\n    result = query.first()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2824, in first\n    return self.limit(1)._iter().first()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2916, in _iter\n    result = self.session.execute(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1713, in execute\n    conn = self._connection_for_bind(bind)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1552, in _connection_for_bind\n    return self._transaction._connection_for_bind(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 747, in _connection_for_bind\n    conn = bind.connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3315, in connect\n    return self._connection_cls(self, close_with_result=close_with_result)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 96, in __init__\n    else engine.raw_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3394, in raw_connection\n    return self._wrap_pool_connect(self.pool.connect, _connection)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3364, in _wrap_pool_connect\n    Connection._handle_dbapi_exception_noconnection(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 2196, in _handle_dbapi_exception_noconnection\n    util.raise_(newraise, with_traceback=exc_info[2], from_=e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File 
"/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n
Dec  9 11:15:06 compute-0 nova_compute[189493]: 2025-12-09 11:15:06.588 189497 ERROR nova.servicegroup.drivers.db #033[00m
Dec  9 11:15:06 compute-0 rsyslogd[236818]: message too long (8986) with configured size 8096, begin of message is: ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packag [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Dec  9 11:15:06 compute-0 rsyslogd[236818]: message too long (9052) with configured size 8096, begin of message is: 2025-12-09 11:15:06.588 189497 ERROR nova.servicegroup.drivers.db ['Traceback (m [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
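Note the different failure path compared with the periodic task at 11:15:00: nova-compute never opens a database connection itself. Service.save() goes through the conductor's indirection API over RPC, the conductor hits the same refused MySQL connection, and the compute side receives the serialized remote traceback wrapped in oslo_messaging.rpc.client.RemoteError. A minimal sketch of that wrapping pattern (illustrative names, not nova's actual code):

    import traceback

    class RemoteError(Exception):
        """Caller-side stand-in for oslo_messaging's RemoteError: it carries the
        remote exception's type, value and serialized traceback as plain data."""
        def __init__(self, exc_type, value, tb_text):
            super().__init__(f"Remote error: {exc_type} {value}")
            self.exc_type, self.value, self.tb_text = exc_type, value, tb_text

    def conductor_object_action(method):
        # Pretend server side: the conductor is the process that talks to MySQL.
        raise ConnectionRefusedError(111, "ECONNREFUSED")

    def rpc_call(method):
        # Pretend transport: exceptions cannot cross the process boundary, so
        # the failure is serialized and re-raised on the caller as RemoteError.
        try:
            return conductor_object_action(method)
        except Exception as e:
            raise RemoteError(type(e).__name__, e, traceback.format_exc()) from None

    try:
        rpc_call("object_action")
    except RemoteError as e:
        print(e)    # the shape nova.servicegroup.drivers.db logs above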
Dec  9 11:15:07 compute-0 nova_compute[189493]: 2025-12-09 11:15:07.974 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:15:09 compute-0 podman[251955]: 2025-12-09 11:15:09.994564232 +0000 UTC m=+0.129076693 container health_status 0391d8911d61abd7376f1f93f329cadfe8d3add845c9e6f46fc2c3dfbcc4f02a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, container_name=multipathd, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, config_id=multipathd, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
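The podman health_status lines repeat for every EDPM-managed container: each carries a 'healthcheck' entry that mounts a host directory into the container and runs its /openstack/healthcheck test, and health_failing_streak=0 means the probe has not failed recently. The same probe can be triggered by hand; a small sketch, using the multipathd container named in the log:

    import subprocess

    # 'podman healthcheck run' executes the container's configured test command
    # (here /openstack/healthcheck) and exits 0 when the container is healthy.
    result = subprocess.run(
        ["podman", "healthcheck", "run", "multipathd"],
        capture_output=True, text=True,
    )
    status = "healthy" if result.returncode == 0 else "unhealthy"
    print(f"multipathd: {status}", result.stdout.strip() or result.stderr.strip())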
Dec  9 11:15:10 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:15:10.704 106644 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=12, options={'arp_ns_explicit_output': 'true', 'mac_prefix': '56:ee:a7', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '3e:d4:ad:27:cb:0f'}, ipsec=False) old=SB_Global(nb_cfg=11) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43#033[00m
Dec  9 11:15:10 compute-0 nova_compute[189493]: 2025-12-09 11:15:10.706 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:15:10 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:15:10.707 106644 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274#033[00m
Dec  9 11:15:12 compute-0 nova_compute[189493]: 2025-12-09 11:15:12.977 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:15:13 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:15:13.710 106644 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=9ec27861-bbe8-48fb-b30f-25b967e1609e, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '12'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89#033[00m
Dec  9 11:15:15 compute-0 nova_compute[189493]: 2025-12-09 11:15:15.710 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:15:15 compute-0 nova_compute[189493]: 2025-12-09 11:15:15.805 189497 INFO nova.servicegroup.drivers.db [-] Recovered from being unable to report status.#033[00m
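This INFO line closes the outage window that began with the 11:15:00 errors: the servicegroup DB driver keeps reporting state on a fixed interval, remembers that earlier attempts failed, and emits a single "Recovered" message on the first success. The pattern, as a sketch (assuming nova's default 10-second report interval):

    import time

    def report_state_loop(save_service_record, interval=10):
        """Heartbeat in the style of the servicegroup DB driver: tolerate
        failures and log one recovery line when the backend comes back."""
        failing = False
        while True:
            try:
                save_service_record()    # service-row update via the conductor
                if failing:
                    print("Recovered from being unable to report status.")
                    failing = False
            except Exception as e:
                if not failing:
                    print(f"Unexpected error while reporting service status: {e}")
                    failing = True
            time.sleep(interval)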
Dec  9 11:15:15 compute-0 podman[251974]: 2025-12-09 11:15:15.962926013 +0000 UTC m=+0.108606038 container health_status 8508a94dacd5acdb5dbf860f4282331529be5c86ebd3e90b10e1dde8bc5013e9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Dec  9 11:15:17 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:15:17.014 106644 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404#033[00m
Dec  9 11:15:17 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:15:17.015 106644 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409#033[00m
Dec  9 11:15:17 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:15:17.015 106644 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423#033[00m
Dec  9 11:15:17 compute-0 nova_compute[189493]: 2025-12-09 11:15:17.979 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:15:18 compute-0 podman[251998]: 2025-12-09 11:15:18.969393076 +0000 UTC m=+0.109118111 container health_status 8ad198c17f1da12dc50d5e17562d0139fb2a2f84db056ee9551dbf4f34c4cb9d (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, release=1214.1726694543, io.openshift.expose-services=, name=ubi9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, vendor=Red Hat, Inc., com.redhat.component=ubi9-container, version=9.4, config_id=edpm, managed_by=edpm_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0, container_name=kepler, vcs-type=git, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, architecture=x86_64, build-date=2024-09-18T21:23:30)
Dec  9 11:15:18 compute-0 podman[251999]: 2025-12-09 11:15:18.988722221 +0000 UTC m=+0.136914968 container health_status ceb1c84a2b093143b9383b7e11364d7e851348d724743a0cd9ce4fd0c7070c92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Dec  9 11:15:20 compute-0 nova_compute[189493]: 2025-12-09 11:15:20.712 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:15:20 compute-0 podman[252036]: 2025-12-09 11:15:20.997376357 +0000 UTC m=+0.143246192 container health_status 8f562587c42532f877bd4ac5090cf2d81dd9415b6201e22f74972e6d6b9e9403 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, tcib_managed=true, config_id=ovn_metadata_agent)
Dec  9 11:15:21 compute-0 podman[252037]: 2025-12-09 11:15:21.019871605 +0000 UTC m=+0.150599525 container health_status b432835229990b9e7cd237d75f8273b15e565fca524d4ea9a7c1f1bf3c773614 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251125, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=edpm, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, tcib_managed=true, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team)
Dec  9 11:15:22 compute-0 nova_compute[189493]: 2025-12-09 11:15:22.982 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:15:24 compute-0 nova_compute[189493]: 2025-12-09 11:15:24.989 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  9 11:15:24 compute-0 nova_compute[189493]: 2025-12-09 11:15:24.990 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145#033[00m
Dec  9 11:15:25 compute-0 nova_compute[189493]: 2025-12-09 11:15:25.110 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154#033[00m
Dec  9 11:15:25 compute-0 nova_compute[189493]: 2025-12-09 11:15:25.715 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:15:27 compute-0 nova_compute[189493]: 2025-12-09 11:15:27.985 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:15:28 compute-0 podman[252073]: 2025-12-09 11:15:28.968590672 +0000 UTC m=+0.109113861 container health_status d3a438131bb4ae6fd62d2e1493edbbbd51d1b8d6cbe1e9243f414a3aa421452b (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Dec  9 11:15:28 compute-0 podman[252072]: 2025-12-09 11:15:28.969155387 +0000 UTC m=+0.118896027 container health_status 5da5cd4e36e0bba48fb617392bc8983ed1dbced7e4599ef74bb3327a2d50468d (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, config_id=edpm, maintainer=Red Hat, Inc., managed_by=edpm_ansible, release=1755695350, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-type=git, vendor=Red Hat, Inc., io.openshift.expose-services=, io.openshift.tags=minimal rhel9, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Dec  9 11:15:29 compute-0 podman[252074]: 2025-12-09 11:15:29.006223495 +0000 UTC m=+0.136278770 container health_status e0a077177b2f078df1f170a6e5c0e8e08d4365b999ec0c487047ed6ab628f3d6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202)
Dec  9 11:15:29 compute-0 podman[203687]: time="2025-12-09T11:15:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  9 11:15:29 compute-0 podman[203687]: @ - - [09/Dec/2025:11:15:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28290 "" "Go-http-client/1.1"
Dec  9 11:15:29 compute-0 podman[203687]: @ - - [09/Dec/2025:11:15:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4332 "" "Go-http-client/1.1"
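These two access-log style lines are the podman system service answering prometheus-podman-exporter over the unix socket the exporter mounts (/run/podman/podman.sock). The libpod REST API can be queried the same way with nothing but the standard library; a sketch, assuming the socket path and the API version path (/v4.9.3/...) seen in the log:

    import http.client
    import json
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """HTTPConnection that talks HTTP over an AF_UNIX socket."""
        def __init__(self, socket_path):
            super().__init__("localhost")   # host value is unused over AF_UNIX
            self.socket_path = socket_path
        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self.socket_path)

    conn = UnixHTTPConnection("/run/podman/podman.sock")
    conn.request("GET", "/v4.9.3/libpod/containers/json?all=true")
    body = conn.getresponse().read()
    for c in json.loads(body):
        print(c.get("Names"), c.get("State"))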
Dec  9 11:15:30 compute-0 nova_compute[189493]: 2025-12-09 11:15:30.719 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:15:31 compute-0 openstack_network_exporter[205823]: ERROR   11:15:31 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  9 11:15:31 compute-0 openstack_network_exporter[205823]: ERROR   11:15:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  9 11:15:31 compute-0 openstack_network_exporter[205823]: ERROR   11:15:31 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  9 11:15:31 compute-0 openstack_network_exporter[205823]: ERROR   11:15:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  9 11:15:31 compute-0 openstack_network_exporter[205823]: ERROR   11:15:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Dec  9 11:15:32 compute-0 nova_compute[189493]: 2025-12-09 11:15:32.989 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:15:35 compute-0 nova_compute[189493]: 2025-12-09 11:15:35.723 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:15:37 compute-0 nova_compute[189493]: 2025-12-09 11:15:37.991 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:15:40 compute-0 nova_compute[189493]: 2025-12-09 11:15:40.727 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:15:40 compute-0 podman[252146]: 2025-12-09 11:15:40.972421187 +0000 UTC m=+0.115049457 container health_status 0391d8911d61abd7376f1f93f329cadfe8d3add845c9e6f46fc2c3dfbcc4f02a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=multipathd, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, container_name=multipathd, maintainer=OpenStack Kubernetes Operator team)
Dec  9 11:15:42 compute-0 nova_compute[189493]: 2025-12-09 11:15:42.995 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:15:44 compute-0 nova_compute[189493]: 2025-12-09 11:15:44.957 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  9 11:15:44 compute-0 nova_compute[189493]: 2025-12-09 11:15:44.958 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  9 11:15:45 compute-0 nova_compute[189493]: 2025-12-09 11:15:45.730 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263#033[00m
Dec  9 11:15:45 compute-0 nova_compute[189493]: 2025-12-09 11:15:45.840 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210#033[00m
Dec  9 11:15:46 compute-0 podman[252166]: 2025-12-09 11:15:46.97417099 +0000 UTC m=+0.119957615 container health_status 8508a94dacd5acdb5dbf860f4282331529be5c86ebd3e90b10e1dde8bc5013e9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=edpm, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Dec  9 11:15:48 compute-0 nova_compute[189493]: 2025-12-09 11:15:47.998 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 11:15:48 compute-0 nova_compute[189493]: 2025-12-09 11:15:48.841 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  9 11:15:49 compute-0 nova_compute[189493]: 2025-12-09 11:15:49.836 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
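Note: the recurring `Running periodic task ComputeManager._*` lines come from oslo.service's periodic-task machinery, which collects every method decorated with `@periodic_task.periodic_task` on a manager class and invokes them on a timer. A reduced sketch of that wiring against the oslo.service API (toy class, not nova's actual ComputeManager):

```python
from oslo_config import cfg
from oslo_service import periodic_task

class DemoManager(periodic_task.PeriodicTasks):
    """Toy manager; decorated methods are discovered automatically."""

    @periodic_task.periodic_task(spacing=10)
    def _check_something(self, context):
        # oslo.service logs "Running periodic task ..." for each such method.
        print("periodic check ran")

manager = DemoManager(cfg.CONF)
# A real service drives this from a timer loop; one manual pass for the demo.
manager.run_periodic_tasks(context=None)
```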
Dec  9 11:15:49 compute-0 podman[252190]: 2025-12-09 11:15:49.963998448 +0000 UTC m=+0.100019184 container health_status 8ad198c17f1da12dc50d5e17562d0139fb2a2f84db056ee9551dbf4f34c4cb9d (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., build-date=2024-09-18T21:23:30, io.buildah.version=1.29.0, vendor=Red Hat, Inc., name=ubi9, com.redhat.component=ubi9-container, config_id=edpm, summary=Provides the latest release of Red Hat Universal Base Image 9., architecture=x86_64, version=9.4, config_data={'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'privileged': 'true', 'restart': 'always', 'ports': ['8888:8888'], 'net': 'host', 'command': '-v=2', 'recreate': True, 'environment': {'ENABLE_GPU': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_VM_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'test': '/openstack/healthcheck kepler', 'mount': '/var/lib/openstack/healthchecks/kepler'}, 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9, release=1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.openshift.tags=base rhel9, managed_by=edpm_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, release-0.7.12=)
Dec  9 11:15:49 compute-0 podman[252191]: 2025-12-09 11:15:49.973100086 +0000 UTC m=+0.097467607 container health_status ceb1c84a2b093143b9383b7e11364d7e851348d724743a0cd9ce4fd0c7070c92 (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'privileged': 'true', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck ipmi', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi'}, 'volumes': ['/var/lib/openstack/config/telemetry-power-monitoring:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer-agent-ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=edpm, container_name=ceilometer_agent_ipmi)
Dec  9 11:15:50 compute-0 nova_compute[189493]: 2025-12-09 11:15:50.240 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  9 11:15:50 compute-0 nova_compute[189493]: 2025-12-09 11:15:50.733 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 11:15:51 compute-0 podman[252230]: 2025-12-09 11:15:51.964935292 +0000 UTC m=+0.102594271 container health_status 8f562587c42532f877bd4ac5090cf2d81dd9415b6201e22f74972e6d6b9e9403 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': '0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/config-data/ansible-generated/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, tcib_managed=true)
Dec  9 11:15:52 compute-0 podman[252231]: 2025-12-09 11:15:52.000410788 +0000 UTC m=+0.132562083 container health_status b432835229990b9e7cd237d75f8273b15e565fca524d4ea9a7c1f1bf3c773614 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=3a7876c5b6a4ff2e2bc50e11e9db5f42, config_data={'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'user': 'ceilometer', 'restart': 'always', 'command': 'kolla_start', 'security_opt': 'label:type:ceilometer_polling_t', 'net': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck compute', 'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute'}, 'volumes': ['/var/lib/openstack/config/telemetry:/var/lib/openstack/config/:z', '/var/lib/openstack/config/telemetry/ceilometer-agent-compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/config/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251125, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, config_id=edpm, managed_by=edpm_ansible)
Dec  9 11:15:52 compute-0 nova_compute[189493]: 2025-12-09 11:15:52.843 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  9 11:15:53 compute-0 nova_compute[189493]: 2025-12-09 11:15:53.000 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 11:15:55 compute-0 nova_compute[189493]: 2025-12-09 11:15:55.737 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 11:15:55 compute-0 nova_compute[189493]: 2025-12-09 11:15:55.842 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  9 11:15:55 compute-0 nova_compute[189493]: 2025-12-09 11:15:55.875 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  9 11:15:55 compute-0 nova_compute[189493]: 2025-12-09 11:15:55.876 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  9 11:15:55 compute-0 nova_compute[189493]: 2025-12-09 11:15:55.876 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
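Note: the Acquiring/acquired/released triplets around `compute_resources` are oslo.concurrency's lockutils DEBUG-logging its process-local lock, including how long the caller waited and how long the lock was held. The same pattern reduced to its essentials (a sketch, not nova code):

```python
from oslo_concurrency import lockutils

@lockutils.synchronized("compute_resources")
def clean_compute_node_cache():
    # Only one thread in this process holds "compute_resources" at a time;
    # lockutils DEBUG-logs the waited/held durations seen in the log above.
    pass

clean_compute_node_cache()
```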
Dec  9 11:15:55 compute-0 nova_compute[189493]: 2025-12-09 11:15:55.877 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Dec  9 11:15:56 compute-0 nova_compute[189493]: 2025-12-09 11:15:56.355 189497 WARNING nova.virt.libvirt.driver [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Dec  9 11:15:56 compute-0 nova_compute[189493]: 2025-12-09 11:15:56.357 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5384MB free_disk=72.17571640014648GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Dec  9 11:15:56 compute-0 nova_compute[189493]: 2025-12-09 11:15:56.357 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  9 11:15:56 compute-0 nova_compute[189493]: 2025-12-09 11:15:56.358 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  9 11:15:56 compute-0 nova_compute[189493]: 2025-12-09 11:15:56.821 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Dec  9 11:15:56 compute-0 nova_compute[189493]: 2025-12-09 11:15:56.822 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Dec  9 11:15:56 compute-0 nova_compute[189493]: 2025-12-09 11:15:56.915 189497 DEBUG nova.scheduler.client.report [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Refreshing inventories for resource provider cdc1168d-33c9-4d2c-8f23-1b695a68afd0 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Dec  9 11:15:57 compute-0 nova_compute[189493]: 2025-12-09 11:15:57.076 189497 DEBUG nova.scheduler.client.report [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Updating ProviderTree inventory for provider cdc1168d-33c9-4d2c-8f23-1b695a68afd0 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Dec  9 11:15:57 compute-0 nova_compute[189493]: 2025-12-09 11:15:57.077 189497 DEBUG nova.compute.provider_tree [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Updating inventory in ProviderTree for provider cdc1168d-33c9-4d2c-8f23-1b695a68afd0 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Dec  9 11:15:57 compute-0 nova_compute[189493]: 2025-12-09 11:15:57.099 189497 DEBUG nova.scheduler.client.report [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Refreshing aggregate associations for resource provider cdc1168d-33c9-4d2c-8f23-1b695a68afd0, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Dec  9 11:15:57 compute-0 nova_compute[189493]: 2025-12-09 11:15:57.145 189497 DEBUG nova.scheduler.client.report [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Refreshing trait associations for resource provider cdc1168d-33c9-4d2c-8f23-1b695a68afd0, traits: COMPUTE_STORAGE_BUS_SATA,COMPUTE_NET_VIF_MODEL_PCNET,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_SSE,HW_CPU_X86_AMD_SVM,HW_CPU_X86_SSE4A,COMPUTE_STORAGE_BUS_FDC,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_STORAGE_BUS_VIRTIO,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_SSE42,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,HW_CPU_X86_BMI,HW_CPU_X86_BMI2,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_AVX,COMPUTE_NET_VIF_MODEL_VMXNET3,HW_CPU_X86_SHA,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_AESNI,HW_CPU_X86_CLMUL,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_ABM,COMPUTE_NET_VIF_MODEL_VIRTIO,HW_CPU_X86_SSSE3,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_GRAPHICS_MODEL_VIRTIO,HW_CPU_X86_SVM,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_DEVICE_TAGGING,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,HW_CPU_X86_F16C,COMPUTE_STORAGE_BUS_USB,HW_CPU_X86_AVX2,COMPUTE_NODE,COMPUTE_TRUSTED_CERTS,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_NET_ATTACH_INTERFACE,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_VIF_MODEL_E1000,COMPUTE_GRAPHICS_MODEL_CIRRUS,HW_CPU_X86_SSE2,COMPUTE_RESCUE_BFV,COMPUTE_IMAGE_TYPE_AMI,HW_CPU_X86_FMA3,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_VOLUME_ATTACH_WITH_TAG,COMPUTE_ACCELERATORS,HW_CPU_X86_MMX,COMPUTE_SECURITY_TPM_2_0,COMPUTE_STORAGE_BUS_SCSI,HW_CPU_X86_SSE41,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_GRAPHICS_MODEL_VGA,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_GRAPHICS_MODEL_BOCHS _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Dec  9 11:15:57 compute-0 nova_compute[189493]: 2025-12-09 11:15:57.180 189497 DEBUG nova.compute.provider_tree [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Inventory has not changed in ProviderTree for provider: cdc1168d-33c9-4d2c-8f23-1b695a68afd0 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Dec  9 11:15:57 compute-0 nova_compute[189493]: 2025-12-09 11:15:57.204 189497 DEBUG nova.scheduler.client.report [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Inventory has not changed for provider cdc1168d-33c9-4d2c-8f23-1b695a68afd0 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
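Note: the inventory dict repeated in the lines above is what the resource tracker pushes to placement; the schedulable capacity per resource class is (total - reserved) * allocation_ratio. Recomputing it from the logged values:

```python
# Inventory values copied from the log lines above.
inventory = {
    "VCPU":      {"total": 8,    "reserved": 0,   "allocation_ratio": 4.0},
    "MEMORY_MB": {"total": 7679, "reserved": 512, "allocation_ratio": 1.0},
    "DISK_GB":   {"total": 79,   "reserved": 1,   "allocation_ratio": 0.9},
}

for rc, inv in inventory.items():
    capacity = (inv["total"] - inv["reserved"]) * inv["allocation_ratio"]
    print(rc, capacity)
# VCPU 32.0, MEMORY_MB 7167.0, DISK_GB 70.2 schedulable units
```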
Dec  9 11:15:57 compute-0 nova_compute[189493]: 2025-12-09 11:15:57.207 189497 DEBUG nova.compute.resource_tracker [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Dec  9 11:15:57 compute-0 nova_compute[189493]: 2025-12-09 11:15:57.208 189497 DEBUG oslo_concurrency.lockutils [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.851s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  9 11:15:58 compute-0 nova_compute[189493]: 2025-12-09 11:15:58.004 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 11:15:59 compute-0 nova_compute[189493]: 2025-12-09 11:15:59.209 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  9 11:15:59 compute-0 nova_compute[189493]: 2025-12-09 11:15:59.210 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Dec  9 11:15:59 compute-0 nova_compute[189493]: 2025-12-09 11:15:59.211 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Dec  9 11:15:59 compute-0 podman[203687]: time="2025-12-09T11:15:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Dec  9 11:15:59 compute-0 podman[203687]: @ - - [09/Dec/2025:11:15:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28290 "" "Go-http-client/1.1"
Dec  9 11:15:59 compute-0 podman[203687]: @ - - [09/Dec/2025:11:15:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4334 "" "Go-http-client/1.1"
Dec  9 11:15:59 compute-0 nova_compute[189493]: 2025-12-09 11:15:59.782 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Dec  9 11:15:59 compute-0 nova_compute[189493]: 2025-12-09 11:15:59.841 189497 DEBUG oslo_service.periodic_task [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Dec  9 11:15:59 compute-0 nova_compute[189493]: 2025-12-09 11:15:59.842 189497 DEBUG nova.compute.manager [None req-ebadb1bc-7159-418c-bcf2-12c4a3f88381 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Dec  9 11:15:59 compute-0 podman[252268]: 2025-12-09 11:15:59.970151356 +0000 UTC m=+0.102501529 container health_status d3a438131bb4ae6fd62d2e1493edbbbd51d1b8d6cbe1e9243f414a3aa421452b (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9100:9100'], 'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal'}, 'healthcheck': {'test': '/openstack/healthcheck node_exporter', 'mount': '/var/lib/openstack/healthchecks/node_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=edpm, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Dec  9 11:15:59 compute-0 podman[252267]: 2025-12-09 11:15:59.979331566 +0000 UTC m=+0.116795842 container health_status 5da5cd4e36e0bba48fb617392bc8983ed1dbced7e4599ef74bb3327a2d50468d (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, distribution-scope=public, architecture=x86_64, maintainer=Red Hat, Inc., config_id=edpm, release=1755695350, vendor=Red Hat, Inc., version=9.6, io.buildah.version=1.33.7, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'restart': 'always', 'recreate': True, 'privileged': True, 'ports': ['9105:9105'], 'command': [], 'net': 'host', 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml'}, 'healthcheck': {'test': '/openstack/healthcheck openstack-netwo', 'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, managed_by=edpm_ansible, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal)
Dec  9 11:16:00 compute-0 podman[252269]: 2025-12-09 11:16:00.026910419 +0000 UTC m=+0.153599104 container health_status e0a077177b2f078df1f170a6e5c0e8e08d4365b999ec0c487047ed6ab628f3d6 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Dec  9 11:16:00 compute-0 nova_compute[189493]: 2025-12-09 11:16:00.739 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 11:16:01 compute-0 openstack_network_exporter[205823]: ERROR   11:16:01 appctl.go:131: Failed to prepare call to ovsdb-server: no control socket files found for the ovs db server
Dec  9 11:16:01 compute-0 openstack_network_exporter[205823]: ERROR   11:16:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  9 11:16:01 compute-0 openstack_network_exporter[205823]: ERROR   11:16:01 appctl.go:144: Failed to get PID for ovn-northd: no control socket files found for ovn-northd
Dec  9 11:16:01 compute-0 openstack_network_exporter[205823]: ERROR   11:16:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Dec  9 11:16:01 compute-0 openstack_network_exporter[205823]: ERROR   11:16:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
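Note: the ERROR lines above mean openstack_network_exporter could not find the runtime control sockets it needs for appctl-style calls (ovsdb-server and ovn-northd are not local to this compute node), and the pmd-perf/pmd-rxq queries fail because no userspace (dpif-netdev) datapath exists here. A quick check for which control sockets are actually present, assuming the conventional runtime directories:

```python
import glob

# Conventional runtime locations for OVS and OVN daemon control sockets.
for pattern in ("/var/run/openvswitch/*.ctl", "/var/run/ovn/*.ctl"):
    found = glob.glob(pattern)
    print(pattern, "->", found or "none")
```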
Dec  9 11:16:03 compute-0 nova_compute[189493]: 2025-12-09 11:16:03.006 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 11:16:04 compute-0 systemd-logind[806]: New session 32 of user zuul.
Dec  9 11:16:04 compute-0 systemd[1]: Started Session 32 of User zuul.
Dec  9 11:16:05 compute-0 nova_compute[189493]: 2025-12-09 11:16:05.741 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 11:16:08 compute-0 nova_compute[189493]: 2025-12-09 11:16:08.008 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 11:16:10 compute-0 ovs-vsctl[252509]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
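Note: the ovs-vsctl ERR above is the result of running `get Open_vSwitch . other_config:dpdk-init` when the other_config map has no dpdk-init key; with --if-exists the same probe returns empty output instead of failing. A sketch of the tolerant form:

```python
import subprocess

# --if-exists turns a missing map key into empty output instead of an error.
result = subprocess.run(
    ["ovs-vsctl", "--if-exists", "get", "Open_vSwitch", ".",
     "other_config:dpdk-init"],
    capture_output=True, text=True, check=True,
)
value = result.stdout.strip().strip('"')
print("dpdk-init:", value or "not set")
```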
Dec  9 11:16:10 compute-0 nova_compute[189493]: 2025-12-09 11:16:10.744 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 11:16:11 compute-0 systemd[1]: proc-sys-fs-binfmt_misc.automount: Got automount request for /proc/sys/fs/binfmt_misc, triggered by 252362 (sos)
Dec  9 11:16:11 compute-0 systemd[1]: Mounting Arbitrary Executable File Formats File System...
Dec  9 11:16:11 compute-0 systemd[1]: Mounted Arbitrary Executable File Formats File System.
Dec  9 11:16:11 compute-0 podman[252556]: 2025-12-09 11:16:11.78601611 +0000 UTC m=+0.137629096 container health_status 0391d8911d61abd7376f1f93f329cadfe8d3add845c9e6f46fc2c3dfbcc4f02a (image=quay.io/podified-antelope-centos9/openstack-multipathd:current-podified, name=multipathd, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=multipathd, container_name=multipathd, org.label-schema.license=GPLv2, config_data={'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/multipathd', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-multipathd:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/var/lib/kolla/config_files/multipathd.json:/var/lib/kolla/config_files/config.json:ro', '/dev:/dev', '/run/udev:/run/udev', '/sys:/sys', '/lib/modules:/lib/modules:ro', '/etc/iscsi:/etc/iscsi:ro', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath:z', '/etc/multipath.conf:/etc/multipath.conf:ro', '/var/lib/openstack/healthchecks/multipathd:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Dec  9 11:16:12 compute-0 virtqemud[189118]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Dec  9 11:16:12 compute-0 virtqemud[189118]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Dec  9 11:16:12 compute-0 virtqemud[189118]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
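Note: these virtqemud messages reflect modular libvirt, where the qemu driver probes the read-only sockets of the virtnetworkd, virtnwfilterd and virtstoraged companion daemons; those daemons are simply not running on this compute node. Checking for the same sockets directly (paths copied from the messages above):

```python
import os

SOCKETS = (
    "/var/run/libvirt/virtnetworkd-sock-ro",
    "/var/run/libvirt/virtnwfilterd-sock-ro",
    "/var/run/libvirt/virtstoraged-sock-ro",
)

for path in SOCKETS:
    print(path, "present" if os.path.exists(path) else "missing")
```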
Dec  9 11:16:13 compute-0 nova_compute[189493]: 2025-12-09 11:16:13.010 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 11:16:15 compute-0 nova_compute[189493]: 2025-12-09 11:16:15.747 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Dec  9 11:16:16 compute-0 systemd[1]: Starting Hostname Service...
Dec  9 11:16:16 compute-0 systemd[1]: Started Hostname Service.
Dec  9 11:16:17 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:16:17.016 106644 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Dec  9 11:16:17 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:16:17.016 106644 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Dec  9 11:16:17 compute-0 ovn_metadata_agent[106639]: 2025-12-09 11:16:17.017 106644 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Dec  9 11:16:17 compute-0 podman[253142]: 2025-12-09 11:16:17.528085691 +0000 UTC m=+0.090137125 container health_status 8508a94dacd5acdb5dbf860f4282331529be5c86ebd3e90b10e1dde8bc5013e9 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'restart': 'always', 'recreate': True, 'user': 'root', 'privileged': True, 'ports': ['9882:9882'], 'net': 'host', 'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'CONTAINER_HOST': 'unix:///run/podman/podman.sock'}, 'healthcheck': {'test': '/openstack/healthcheck podman_exporter', 'mount': '/var/lib/openstack/healthchecks/podman_exporter'}, 'volumes': ['/var/lib/openstack/config/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=edpm, container_name=podman_exporter)
Dec  9 11:16:18 compute-0 nova_compute[189493]: 2025-12-09 11:16:18.012 189497 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
